2011-12-16

SICP Exercise 2.60: Duplicate Sets

We specified that a set would be represented as a list with no duplicates. Now suppose we allow duplicates. For instance, the set {1,2,3} could be represented as the list (2 3 2 1 3 2 2). Design procedures element-of-set?, adjoin-set, union-set, and intersection-set that operate on this representation. How does the efficiency of each compare with the corresponding procedure for the non-duplicate representation? Are there applications for which you would use this representation in preference to the non-duplicate one?

We're still using a list as our representation here; we've just removed the restriction on duplicates. This means that the set operations that build a new set by adding items to an existing one, namely adjoin-set and union-set, no longer need to check whether those items are already present before adding them. We'll look at those two operations in detail in a minute, but let's first look at the other two: element-of-set? and intersection-set.

The implementation of element-of-set? given in the book simply iterates through the list representation of the set and returns true as soon as it encounters a matching element, or false if there is no such element. Allowing duplicates in the list representation doesn't change this approach: we still need to scan through the list to see if the element is present; it's just that, where duplicates exist, we may compare against the same value multiple times. So element-of-set? can be used as is. It may run slower though, as the presence of duplicates means that the lists we're scanning may be longer than their duplicate-free equivalents.
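For reference, here's that linear scan as the book gives it (a sketch, written with Scheme's #t and #f where the book uses its true and false aliases):
(define (element-of-set? x set)
  (cond ((null? set) #f)                       ; exhausted the list: not present
        ((equal? x (car set)) #t)              ; found a match
        (else (element-of-set? x (cdr set))))) ; keep scanning the rest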

We can see that the same holds true for intersection-set. The implementation in the book iterates through the list representation of set1 and, for each element, adds it to the result set iff it's also present in set2. Duplicates in either set1 or set2 don't change this approach: we still need to examine each item in set1 and only include it in the result if it's also in set2, so we can reuse the existing implementation of intersection-set. Note that the result set could contain duplicates: any element that appears in both sets will appear in the result as many times as it appears in set1. As with element-of-set?, this means intersection-set may run slower and may generate larger sets than it would with equivalent duplicate-free sets.
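Again for reference, here's the book's implementation, which we reuse unchanged:
(define (intersection-set set1 set2)
  (cond ((or (null? set1) (null? set2)) '())        ; either set empty: nothing in common
        ((element-of-set? (car set1) set2)          ; head of set1 is also in set2...
         (cons (car set1)
               (intersection-set (cdr set1) set2))) ; ...so keep it
        (else (intersection-set (cdr set1) set2)))) ; otherwise drop it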

Now onto adjoin-set... In the "no duplicates" implementation provided in the book, adjoin-set checks whether the element is already in the set before adding it. As we're allowed duplicates in our representation we don't need this test - we can just cons the element onto the head of the list. This gives us a very simple implementation:
(define (adjoin-set x set)
  (cons x set))
...or, to put it more succinctly...
(define adjoin-set cons)
As this simply puts the item on the head of the list, it's a Θ(1) operation, and so is much more efficient than the "no duplicates" implementation (which is Θ(n)). Note that if we wanted to be slightly smart about it, still allowing duplicates generally while retaining the Θ(1) efficiency, we could check the head of the list to ensure we're not adding an element identical to the one already there:
(define (adjoin-set x set)
  (if (or (null? set) (not (equal? x (car set))))
      (cons x set)
      set))
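With this version duplicates can still arise, just never at the head:
> (adjoin-set 2 '(2 1 3))
'(2 1 3)
> (adjoin-set 2 '(1 2 3))
'(2 1 2 3)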
However, given that the exercise's example representation of the set {1,2,3}, (2 3 2 1 3 2 2), includes two adjacent '2' values, this optimization clearly isn't required by the representation.

The last operation, union-set, needs to produce the set of all values that appear in either set1 or set2. As we no longer need to worry about duplicates we can simply append the two lists together to produce the result we need:
(define (union-set set1 set2)
  (append set1 set2))
...or, more succinctly...
(define union-set append)
Okay, so let's build some sets:
> (define evens
    (adjoin-set 0 (adjoin-set 2 (adjoin-set 4 (adjoin-set 6 (adjoin-set 8 '()))))))
> (define odds
    (adjoin-set 1 (adjoin-set 3 (adjoin-set 5 (adjoin-set 7 (adjoin-set 9 '()))))))
> evens
'(0 2 4 6 8)
> odds
'(1 3 5 7 9)
> (adjoin-set 2 evens)
'(2 0 2 4 6 8)
> (adjoin-set 2 odds)
'(2 1 3 5 7 9)
> (intersection-set evens odds)
'()
> (intersection-set evens evens)
'(0 2 4 6 8)
> (union-set evens odds)
'(0 2 4 6 8 1 3 5 7 9)
> (union-set evens evens)
'(0 2 4 6 8 0 2 4 6 8)
Let's compare the efficiencies of the two sets of implementations:

Operation           No Duplicates   Allow Duplicates
element-of-set?     Θ(n)            Θ(n)
adjoin-set          Θ(n)            Θ(1)
intersection-set    Θ(n²)           Θ(n²)
union-set           Θ(n²)           Θ(n)

Of course we need to be slightly careful in the comparison here... While element-of-set? is Θ(n) regardless of whether or not we allow duplicates, the n here is the number of elements in the list representation of the set, not the number of distinct elements in the set. As a result, n could be much larger in the "allow duplicates" case than in the "no duplicates" case, and so the operation could be much slower. A similar issue arises for intersection-set too (except we're dealing with Θ(n²) there, so the effect can be even more pronounced).

We also need to be aware of this issue when comparing the two union-set efficiencies. While the "allow duplicates" case is definitely more efficient (Θ(n) as opposed to the "no duplicates" efficiency of Θ(n²)), the value of n in the "allow duplicates" case could potentially be much higher. For example, consider the set {1, 2, 3}. In the "no duplicates" case the size of the underlying list we have to process (and so the n we are dealing with) will always be 3. However, in the "allow duplicates" case all we know for sure is that it will be at least 3; there's (theoretically) no upper bound on the size of the underlying list, so with certain list representations the Θ(n) operation in the "allow duplicates" case may still perform worse than the Θ(n²) operation in the "no duplicates" case.
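We can see this growth directly. Repeatedly unioning a set with itself never changes the set it represents, but it doubles the underlying list each time (union-n-times is a hypothetical helper, purely for illustration):
(define (union-n-times set n)
  (if (= n 0)
      set
      (union-n-times (union-set set set) (- n 1)))) ; each round doubles the list
> (length (union-n-times '(1 2 3) 4))
48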

With adjoin-set there is no such issue. The "allow duplicates" case will take constant time regardless of the size of the underlying representation, as cons is a constant-time operation. As a result it doesn't matter that there may be duplicates in the set: they have no effect on the efficiency of the operation, and so the "allow duplicates" case will generally be quicker than the "no duplicates" case.

So when would we use this representation in preference to the non-duplicate one? Well, in applications where we're going to use adjoin-set much more frequently than any of the other operations (and where memory is not a concern) it may be preferable to use the representation that allows duplicates.
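As a hypothetical sketch of that trade-off: if we're recording a high volume of values and only occasionally need the distinct elements, we can adjoin each value in Θ(1) and defer all the deduplication work to a pass that runs only when the distinct elements are actually required:
(define (dedupe set)
  (cond ((null? set) '())
        ((element-of-set? (car set) (cdr set)) ; this value appears again later...
         (dedupe (cdr set)))                   ; ...so drop this occurrence
        (else (cons (car set)
                    (dedupe (cdr set))))))     ; last occurrence: keep it
> (dedupe '(2 3 2 1 3 2 2))
'(1 3 2)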
