From 599628fa38e7ffaa576c7da0afbe2249edb8c2c9 Mon Sep 17 00:00:00 2001
From: Maarten ter Huurne
-[Note: this document is formatted similarly to the SGI STL
-implementation documentation pages, and refers to concepts and classes
-defined there. However, neither this document nor the code it
-describes is associated with SGI, nor is it necessary to have SGI's
-STL implementation installed in order to use this class.] dense_hash_map is a Hashed
-Associative Container that associates objects of type Key
-with objects of type Data. dense_hash_map is a Pair
-Associative Container, meaning that its value type is pair<const Key, Data>. It is also a
-Unique
-Associative Container, meaning that no two elements have keys that
-compare equal using EqualKey. Looking up an element in a dense_hash_map by its key is
-efficient, so dense_hash_map is useful for "dictionaries"
-where the order of elements is irrelevant. If it is important for the
-elements to be in a particular order, however, then map is more appropriate. dense_hash_map is distinguished from other hash-map
-implementations by its speed and by the ability to save
-and restore contents to disk. On the other hand, this hash-map
-implementation can use significantly more space than other hash-map
-implementations, and it also has requirements -- for instance, for a
-distinguished "empty key" -- that may not be easy for all
-applications to satisfy. This class is appropriate for applications that need speedy access
-to relatively small "dictionaries" stored in memory, or for
-applications that need these dictionaries to be persistent. [implementation note] [1]
-
-dense_hash_map::iterator is not a mutable iterator, because
-dense_hash_map::value_type is not Assignable.
-That is, if i is of type dense_hash_map::iterator
-and p is of type dense_hash_map::value_type, then
-*i = p is not a valid expression. However,
-dense_hash_map::iterator isn't a constant iterator either,
-because it can be used to modify the object that it points to. Using
-the same notation as above, (*i).second = p is a valid
-expression. [2]
-
-This member function relies on member template functions, which
-may not be supported by all compilers. If your compiler supports
-member templates, you can call this function with any type of input
-iterator. If your compiler does not yet support member templates,
-though, then the arguments must either be of type const
-value_type* or of type dense_hash_map::const_iterator. [3]
-
-Since operator[] might insert a new element into the
-dense_hash_map, it can't possibly be a const member
-function. Note that the definition of operator[] is
-extremely simple: m[k] is equivalent to
-(*((m.insert(value_type(k, data_type()))).first)).second.
-Strictly speaking, this member function is unnecessary: it exists only
-for convenience. [4]
-
-In order to preserve iterators, erasing hashtable elements does not
-cause a hashtable to resize. This means that after a string of
-erase() calls, the hashtable will use more space than is
-required. At a cost of invalidating all current iterators, you can
-call resize() to manually compact the hashtable. The
-hashtable promotes too-small resize() arguments to the
-smallest legal value, so to compact a hashtable, it's sufficient to
-call resize(0). [5]
-
-Unlike some other hashtable implementations, the optional n in
-the calls to the constructor, resize, and rehash
-indicates not the desired number of buckets that
-should be allocated, but instead the expected number of items to be
-inserted. The class then sizes the hash-map appropriately for the
-number of items specified. It's not an error to actually insert more
-or fewer items into the hashtable, but the implementation is most
-efficient -- does the fewest hashtable resizes -- if the number of
-inserted items is n or slightly less. [6]
-
-dense_hash_map requires you call
-set_empty_key() immediately after constructing the hash-map,
-and before calling any other dense_hash_map method. (This is
-the largest difference between the dense_hash_map API and
-other hash-map APIs. See implementation.html
-for why this is necessary.)
-The argument to set_empty_key() should be a key-value that
-is never used for legitimate hash-map entries. If you have no such
-key value, you will be unable to use dense_hash_map. It is
-an error to call insert() with an item whose key is the
-"empty key."

-dense_hash_map<Key, Data, HashFcn, EqualKey, Alloc>
-
-Example
-
-(Note: this example uses SGI semantics for hash<>
--- the kind used by gcc and most Unix compiler suites -- and not
-Dinkumware semantics -- the kind used by Microsoft Visual Studio. If
-you are using MSVC, this example will not compile as-is: you'll need
-to change hash to hash_compare, and you won't use eqstr at all. See
-the MSVC documentation for hash_map and hash_compare, for more
-details.)
-
-
-#include <iostream>
-#include <cstring>   // for strcmp
-#include <google/dense_hash_map>
-
-using google::dense_hash_map; // namespace where class lives by default
-using std::cout;
-using std::endl;
-using ext::hash; // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS
-
-struct eqstr
-{
- bool operator()(const char* s1, const char* s2) const
- {
- return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
- }
-};
-
-int main()
-{
- dense_hash_map<const char*, int, hash<const char*>, eqstr> months;
-
- months.set_empty_key(NULL);
- months["january"] = 31;
- months["february"] = 28;
- months["march"] = 31;
- months["april"] = 30;
- months["may"] = 31;
- months["june"] = 30;
- months["july"] = 31;
- months["august"] = 31;
- months["september"] = 30;
- months["october"] = 31;
- months["november"] = 30;
- months["december"] = 31;
-
- cout << "september -> " << months["september"] << endl;
- cout << "april -> " << months["april"] << endl;
- cout << "june -> " << months["june"] << endl;
- cout << "november -> " << months["november"] << endl;
-}
-
-
-
-Definition
-
-Defined in the header dense_hash_map.
-This class is not part of the C++ standard, though it is mostly
-compatible with the tr1 class unordered_map.
-
-
-Template parameters
-
-
-
-
-
-
-
-Parameter Description Default
-
-
-
- Key
-
-
- The hash_map's key type. This is also defined as
- dense_hash_map::key_type.
-
-
-
-
-
-
-
-
- Data
-
-
- The hash_map's data type. This is also defined as
- dense_hash_map::data_type. [7]
-
-
-
-
-
-
-
-
- HashFcn
-
-
- The hash function used by the
- hash_map. This is also defined as dense_hash_map::hasher.
-
-
- Note: Hashtable performance depends heavily on the choice of
- hash function. See the performance
- page for more information.
-
- hash<Key>
-
-
-
-
-
- EqualKey
-
-
- The hash_map key equality function: a binary predicate that determines
- whether two keys are equal. This is also defined as
- dense_hash_map::key_equal.
-
-
- equal_to<Key>
-
-
-
-
-
- Alloc
-
-
- Ignored; this is included only for API-compatibility
- with SGI's (and tr1's) STL implementation.
-
-
-
-Model of
-
-Unique Hashed Associative Container,
-Pair Associative Container
-
-
-Type requirements
-
-
-
-
-
-Public base classes
-
-None.
-
-
-Members
-
-
-
-
-
-
-
-Member Where defined Description
-
-
-
- key_type
-
-
- Associative
- Container
-
-
- The dense_hash_map's key type, Key.
-
-
-
-
-
- data_type
-
-
- Pair
- Associative Container
-
-
- The type of object associated with the keys.
-
-
-
-
-
- value_type
-
-
- Pair
- Associative Container
-
-
- The type of object, pair<const key_type, data_type>,
- stored in the hash_map.
-
-
-
-
-
- hasher
-
-
- Hashed
- Associative Container
-
-
- The dense_hash_map's hash
- function.
-
-
-
-
-
- key_equal
-
-
- Hashed
- Associative Container
-
-
- Function
- object that compares keys for equality.
-
-
-
-
-
- allocator_type
-
-
- Unordered Associative Container (tr1)
-
-
- The type of the Allocator given as a template parameter.
-
-
-
-
-
- pointer
-
-
- Container
-
-
- Pointer to T.
-
-
-
-
-
- reference
-
-
- Container
-
-
- Reference to T
-
-
-
-
-
- const_reference
-
-
- Container
-
-
- Const reference to T
-
-
-
-
-
- size_type
-
-
- Container
-
-
- An unsigned integral type.
-
-
-
-
-
- difference_type
-
-
- Container
-
-
- A signed integral type.
-
-
-
-
-
- iterator
-
-
- Container
-
-
- Iterator used to iterate through a dense_hash_map. [1]
-
-
-
-
-
- const_iterator
-
-
- Container
-
-
- Const iterator used to iterate through a dense_hash_map.
-
-
-
-
-
- local_iterator
-
-
- Unordered Associative Container (tr1)
-
-
- Iterator used to iterate through a subset of
- dense_hash_map. [1]
-
-
-
-
-
- const_local_iterator
-
-
- Unordered Associative Container (tr1)
-
-
- Const iterator used to iterate through a subset of
- dense_hash_map.
-
-
-
-
-
- iterator begin()
-
-
- Container
-
-
- Returns an iterator pointing to the beginning of the
- dense_hash_map.
-
-
-
-
-
- iterator end()
-
-
- Container
-
-
- Returns an iterator pointing to the end of the
- dense_hash_map.
-
-
-
-
-
- const_iterator begin() const
-
-
- Container
-
-
- Returns a const_iterator pointing to the beginning of the
- dense_hash_map.
-
-
-
-
-
- const_iterator end() const
-
-
- Container
-
-
- Returns a const_iterator pointing to the end of the
- dense_hash_map.
-
-
-
-
-
- local_iterator begin(size_type i)
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a local_iterator pointing to the beginning of bucket
- i in the dense_hash_map.
-
-
-
-
-
- local_iterator end(size_type i)
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a local_iterator pointing to the end of bucket
- i in the dense_hash_map. For
- dense_hash_map, each bucket contains either 0 or 1 item.
-
-
-
-
-
- const_local_iterator begin(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a const_local_iterator pointing to the beginning of bucket
- i in the dense_hash_map.
-
-
-
-
-
- const_local_iterator end(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a const_local_iterator pointing to the end of bucket
- i in the dense_hash_map. For
- dense_hash_map, each bucket contains either 0 or 1 item.
-
-
-
-
-
- size_type size() const
-
-
- Container
-
-
- Returns the size of the dense_hash_map.
-
-
-
-
-
- size_type max_size() const
-
-
- Container
-
-
- Returns the largest possible size of the dense_hash_map.
-
-
-
-
-
- bool empty() const
-
-
- Container
-
-
- true if the dense_hash_map's size is 0.
-
-
-
-
-
- size_type bucket_count() const
-
-
- Hashed
- Associative Container
-
-
- Returns the number of buckets used by the dense_hash_map.
-
-
-
-
-
- size_type max_bucket_count() const
-
-
- Hashed
- Associative Container
-
-
- Returns the largest possible number of buckets used by the dense_hash_map.
-
-
-
-
-
- size_type bucket_size(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns the number of elements in bucket i. For
- dense_hash_map, this will be either 0 or 1.
-
-
-
-
-
- size_type bucket(const key_type& key) const
-
-
- Unordered Associative Container (tr1)
-
-
- If the key exists in the map, returns the index of the bucket
- containing the given key; otherwise, returns the bucket the key
- would be inserted into.
- This value may be passed to begin(size_type) and
- end(size_type).
-
-
-
-
-
- float load_factor() const
-
-
- Unordered Associative Container (tr1)
-
-
- The number of elements in the dense_hash_map divided by
- the number of buckets.
-
-
-
-
-
- float max_load_factor() const
-
-
- Unordered Associative Container (tr1)
-
-
- The maximum load factor before increasing the number of buckets in
- the dense_hash_map.
-
-
-
-
-
- void max_load_factor(float new_grow)
-
-
- Unordered Associative Container (tr1)
-
-
- Sets the maximum load factor before increasing the number of
- buckets in the dense_hash_map.
-
-
-
-
-
- float min_load_factor() const
-
-
- dense_hash_map
-
-
- The minimum load factor before decreasing the number of buckets in
- the dense_hash_map.
-
-
-
-
-
- void min_load_factor(float new_grow)
-
-
- dense_hash_map
-
-
- Sets the minimum load factor before decreasing the number of
- buckets in the dense_hash_map.
-
-
-
-
-
- void set_resizing_parameters(float shrink, float grow)
-
-
- dense_hash_map
-
-
- DEPRECATED. See below.
-
-
-
-
-
- void resize(size_type n)
-
-
- Hashed
- Associative Container
-
-
- Increases the bucket count to hold at least n items.
- [4] [5]
-
-
-
-
-
- void rehash(size_type n)
-
-
- Unordered Associative Container (tr1)
-
-
- Increases the bucket count to hold at least n items.
- This is identical to resize.
- [4] [5]
-
-
-
-
-
- hasher hash_funct() const
-
-
- Hashed
- Associative Container
-
-
- Returns the hasher object used by the dense_hash_map.
-
-
-
-
-
- hasher hash_function() const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns the hasher object used by the dense_hash_map.
- This is identical to hash_funct.
-
-
-
-
-
- key_equal key_eq() const
-
-
- Hashed
- Associative Container
-
-
- Returns the key_equal object used by the
- dense_hash_map.
-
-
-
-
-
- dense_hash_map()
-
-
- Container
-
-
- Creates an empty dense_hash_map.
-
-
-
-
-
- dense_hash_map(size_type n)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty dense_hash_map that's optimized for holding
- up to n items.
- [5]
-
-
-
-
-
- dense_hash_map(size_type n, const hasher& h)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty dense_hash_map that's optimized for up
- to n items, using h as the hash function.
-
-
-
-
-
- dense_hash_map(size_type n, const hasher& h, const
- key_equal& k)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty dense_hash_map that's optimized for up
- to n items, using h as the hash function and
- k as the key equal function.
-
-
-
-
-
-
-template <class InputIterator>
-dense_hash_map(InputIterator f, InputIterator l)
-[2]
-
- Unique
- Hashed Associative Container
-
-
- Creates a dense_hash_map with a copy of a range.
-
-
-
-
-
-
-template <class InputIterator>
-dense_hash_map(InputIterator f, InputIterator l, size_type n)
-[2]
-
- Unique
- Hashed Associative Container
-
-
- Creates a hash_map with a copy of a range that's optimized to
- hold up to n items.
-
-
-
-
-
-
-template <class InputIterator>
-dense_hash_map(InputIterator f, InputIterator l, size_type n, const
-hasher& h)
[2]
-
- Unique
- Hashed Associative Container
-
-
- Creates a hash_map with a copy of a range that's optimized to hold
- up to n items, using h as the hash function.
-
-
-
-
-
-
-template <class InputIterator>
-dense_hash_map(InputIterator f, InputIterator l, size_type n, const
-hasher& h, const key_equal& k)
[2]
-
- Unique
- Hashed Associative Container
-
-
- Creates a hash_map with a copy of a range that's optimized for
- holding up to n items, using h as the hash
- function and k as the key equal function.
-
-
-
-
-
- dense_hash_map(const hash_map&)
-
-
- Container
-
-
- The copy constructor.
-
-
-
-
-
- dense_hash_map& operator=(const hash_map&)
-
-
- Container
-
-
- The assignment operator
-
-
-
-
-
- void swap(hash_map&)
-
-
- Container
-
-
- Swaps the contents of two hash_maps.
-
-
-
-
-
-
-pair<iterator, bool> insert(const value_type& x)
-
-
- Unique
- Associative Container
-
-
- Inserts x into the dense_hash_map.
-
-
-
-
-
-
-template <class InputIterator>
-void insert(InputIterator f, InputIterator l)
[2]
-
- Unique
- Associative Container
-
-
- Inserts a range into the dense_hash_map.
-
-
-
-
-
- void set_empty_key(const key_type& key) [6]
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- void set_deleted_key(const key_type& key) [6]
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- void clear_deleted_key() [6]
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- void erase(iterator pos)
-
-
- Associative
- Container
-
-
- Erases the element pointed to by pos.
- [6]
-
-
-
-
-
- size_type erase(const key_type& k)
-
-
- Associative
- Container
-
-
- Erases the element whose key is k.
- [6]
-
-
-
-
-
- void erase(iterator first, iterator last)
-
-
- Associative
- Container
-
-
- Erases all elements in a range.
- [6]
-
-
-
-
-
- void clear()
-
-
- Associative
- Container
-
-
- Erases all of the elements.
-
-
-
-
-
- void clear_no_resize()
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- const_iterator find(const key_type& k) const
-
-
- Associative
- Container
-
-
- Finds an element whose key is k.
-
-
-
-
-
- iterator find(const key_type& k)
-
-
- Associative
- Container
-
-
- Finds an element whose key is k.
-
-
-
-
-
- size_type count(const key_type& k) const
-
-
- Unique
- Associative Container
-
-
- Counts the number of elements whose key is k.
-
-
-
-
-
-
-pair<const_iterator, const_iterator> equal_range(const
-key_type& k) const
-
- Associative
- Container
-
-
- Finds a range containing all elements whose key is k.
-
-
-
-
-
-
-pair<iterator, iterator> equal_range(const
-key_type& k)
-
- Associative
- Container
-
-
- Finds a range containing all elements whose key is k.
-
-
-
-
-
-
-data_type& operator[](const key_type& k) [3]
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- dense_hash_map
-
-
- See below.
-
-
-
-
-
-
-bool operator==(const hash_map&, const hash_map&)
-
-
- Hashed
- Associative Container
-
-
- Tests two hash_maps for equality. This is a global function, not a
- member function.
-
-New members
-
-These members are not defined in the Unique
-Hashed Associative Container, Pair
-Associative Container, or tr1's Unordered Associative
-Container requirements, but are specific to
-dense_hash_map.
-
-
-
-
-
-
-
-Member Description
-
-
-
- void set_empty_key(const key_type& key)
-
-
- Sets the distinguished "empty" key to key. This must be
- called immediately after construction, before calling any
- other dense_hash_map operation. [6]
-
-
-
-
-
- void set_deleted_key(const key_type& key)
-
-
- Sets the distinguished "deleted" key to key. This must be
- called before any calls to erase(). [6]
-
-
-
-
-
- void clear_deleted_key()
-
-
- Clears the distinguished "deleted" key. After this is called,
- calls to erase() are not valid on this object.
- [6]
-
-
-
-
-
- void clear_no_resize()
-
-
- Clears the hashtable like clear() does, but does not
- recover the memory used for hashtable buckets. (The memory
- used by the items in the hashtable is still recovered.)
- This can save time for applications that want to reuse a
- dense_hash_map many times, each time with a similar number
- of objects.
-
-
-
-
-
-
-
-data_type&
-operator[](const key_type& k) [3]
-
-
- Returns a reference to the object that is associated with
- a particular key. If the dense_hash_map does not already
- contain such an object, operator[] inserts the default
- object data_type(). [3]
-
-
- void set_resizing_parameters(float shrink, float grow)
-
-
- This function is DEPRECATED. It is equivalent to calling
- min_load_factor(shrink); max_load_factor(grow).
-
-
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- Write hashtable metadata to fp. See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- Read hashtable metadata from fp. See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- Write hashtable contents to fp. This is valid only if the
- hashtable key and value are "plain" data. See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- Read hashtable contents from fp. This is valid only if the
- hashtable key and value are "plain" data. See below.
-
-Notes
-
-
There is no need to call set_deleted_key if you do not
wish to call erase() on the hash-map.

It is acceptable to change the deleted key at any time by calling
set_deleted_key() with a new argument. You can also call
clear_deleted_key(), at which point all keys become valid for
insertion but no hashtable entries can be deleted until
set_deleted_key() is called again.
[7]

dense_hash_map requires that data_type have a zero-argument default
constructor. This is because dense_hash_map uses the special value
pair(empty_key, data_type()) to denote empty buckets, and thus needs
to be able to construct a data_type with no arguments.

If your data_type does not have a zero-argument default constructor,
there are several workarounds:
-
IMPORTANT IMPLEMENTATION NOTE: In the current version of this code,
the input/output routines for dense_hash_map have not yet been
implemented. This section explains the API, but note that all calls
to these routines will fail (return false). It is a TODO to remedy
this situation.
It is possible to save and restore dense_hash_map objects to disk.
Storage takes place in two steps. The first writes the hashtable
metadata. The second writes the actual data.

To write a hashtable to disk, first call write_metadata() on an open
file pointer. This saves the hashtable information in a
byte-order-independent format.

After the metadata has been written to disk, you must write the
actual data stored in the hash-map to disk. If both the key and data
are "simple" enough, you can do this by calling
write_nopointer_data(). "Simple" data is data that can be safely
copied to disk via fwrite(). Native C data types fall into this
category, as do structs of native C data types. Pointers and STL
objects do not.

Note that write_nopointer_data() does not do any endian conversion.
Thus, it is only appropriate when you intend to read the data on a
machine with the same endianness as the one that wrote it.
If you cannot use write_nopointer_data() for any reason, you can
write the data yourself by iterating over the dense_hash_map with a
const_iterator and writing the key and data in any manner you wish.

To read the hashtable information from disk, first create a
dense_hash_map object. Then open a file pointer to the saved
hashtable and call read_metadata(). If you saved the data via
write_nopointer_data(), you can follow the read_metadata() call with
a call to read_nopointer_data(). This is all that is needed.

If you saved the data through a custom write routine, you must call a
custom read routine to read in the data. To do this, iterate over the
dense_hash_map with an iterator; this operation is meaningful because
the metadata has already been set up. For each item, read the key and
value from disk and set them appropriately. You will need a
const_cast on the key, since it->first is always const. You will
also need placement new if the key or value is a C++ object. The
code might look like this:
  for (dense_hash_map<int*, ComplicatedClass>::iterator it = ht.begin();
       it != ht.end(); ++it) {
    // The key is stored in the dense_hash_map as a pointer
    const_cast<int*&>(it->first) = new int;
    fread(const_cast<int*>(it->first), sizeof(int), 1, fp);
    // The value is a complicated C++ class that takes an int to construct
    int ctor_arg;
    fread(&ctor_arg, sizeof(int), 1, fp);
    new (&it->second) ComplicatedClass(ctor_arg);  // "placement new"
  }
erase() is guaranteed not to invalidate any iterators -- except, of
course, iterators pointing to the item being erased. insert()
invalidates all iterators, as does resize().

This is implemented by making erase() not resize the hashtable. If
you desire maximum space efficiency, you can call resize(0) after a
string of erase() calls, to force the hashtable to shrink to the
smallest possible size.

In addition to invalidating iterators, insert() and resize()
invalidate all pointers into the hashtable. If you want to store a
pointer to an object held in a dense_hash_map, either do so after
finishing hashtable inserts, or store the object on the heap and keep
a pointer to it in the dense_hash_map.
The following are SGI STL, and some Google STL, concepts and classes
related to dense_hash_map.

hash_map, Associative Container, Hashed Associative Container, Pair
Associative Container, Unique Hashed Associative Container, set, map,
multiset, multimap, hash_set, hash_multiset, hash_multimap,
sparse_hash_map, sparse_hash_set, dense_hash_set

diff --git a/src/sparsehash-1.6/doc/dense_hash_set.html b/src/sparsehash-1.6/doc/dense_hash_set.html
deleted file mode 100644
index 2a5ff2e..0000000
--- a/src/sparsehash-1.6/doc/dense_hash_set.html
+++ /dev/null
@@ -1,1445 +0,0 @@

[Note: this document is formatted similarly to the SGI STL
implementation documentation pages, and refers to concepts and classes
defined there. However, neither this document nor the code it
describes is associated with SGI, nor is it necessary to have SGI's
STL implementation installed in order to use this class.]
dense_hash_set is a Hashed Associative Container that stores objects
of type Key. dense_hash_set is a Simple Associative Container,
meaning that its value type, as well as its key type, is Key. It is
also a Unique Associative Container, meaning that no two elements
have keys that compare equal using EqualKey.

Looking up an element in a dense_hash_set by its key is efficient, so
dense_hash_set is useful for "dictionaries" where the order of
elements is irrelevant. If it is important for the elements to be in
a particular order, however, then set is more appropriate.

dense_hash_set is distinguished from other hash-set implementations
by its speed and by the ability to save and restore contents to disk.
On the other hand, this hash-set implementation can use significantly
more space than other hash-set implementations, and it also has
requirements -- for instance, for a distinguished "empty key" -- that
may not be easy for all applications to satisfy.

This class is appropriate for applications that need speedy access to
relatively small "dictionaries" stored in memory, or for applications
that need these dictionaries to be persistent. [implementation note]
--- the kind used by gcc and most Unix compiler suites -- and not
-Dinkumware semantics -- the kind used by Microsoft Visual Studio. If
-you are using MSVC, this example will not compile as-is: you'll need
-to change hash
to hash_compare
, and you
-won't use eqstr
at all. See the MSVC documentation for
-hash_map
and hash_compare
, for more
-details.)
-
#include <iostream>
#include <cstring>   // for strcmp
#include <google/dense_hash_set>

using google::dense_hash_set;      // namespace where class lives by default
using std::cout;
using std::endl;
using ext::hash;  // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS

struct eqstr
{
  bool operator()(const char* s1, const char* s2) const
  {
    return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
  }
};

void lookup(const dense_hash_set<const char*, hash<const char*>, eqstr>& Set,
            const char* word)
{
  dense_hash_set<const char*, hash<const char*>, eqstr>::const_iterator it
    = Set.find(word);
  cout << word << ": "
       << (it != Set.end() ? "present" : "not present")
       << endl;
}

int main()
{
  dense_hash_set<const char*, hash<const char*>, eqstr> Set;
  Set.set_empty_key(NULL);
  Set.insert("kiwi");
  Set.insert("plum");
  Set.insert("apple");
  Set.insert("mango");
  Set.insert("apricot");
  Set.insert("banana");

  lookup(Set, "mango");
  lookup(Set, "apple");
  lookup(Set, "durian");
}

Definition

Defined in the header dense_hash_set.
This class is not part of the C++ standard, though it is mostly
compatible with the tr1 class unordered_set.
-
-
-Parameter | Description | Default |
---|---|---|
- Key - | -- The hash_set's key and value type. This is also defined as - dense_hash_set::key_type and - dense_hash_set::value_type. - | -- - | -
- HashFcn - | -
- The hash function used by the
- hash_set. This is also defined as dense_hash_set::hasher.
Note: Hashtable performance depends heavily on the choice of - hash function. See the performance - page for more information. - |
-- hash<Key> - | -
- EqualKey - | -- The hash_set key equality function: a binary predicate that determines - whether two keys are equal. This is also defined as - dense_hash_set::key_equal. - | -- equal_to<Key> - | -
- Alloc - | -- Ignored; this is included only for API-compatibility - with SGI's (and tr1's) STL implementation. - | -- | -
Member | Where defined | Description |
---|---|---|
- value_type - | -- Container - | -- The type of object, T, stored in the hash_set. - | -
- key_type - | -- Associative - Container - | -- The key type associated with value_type. - | -
- hasher - | -- Hashed - Associative Container - | -- The dense_hash_set's hash - function. - | -
- key_equal - | -- Hashed - Associative Container - | -- Function - object that compares keys for equality. - | -
- allocator_type - | -- Unordered Associative Container (tr1) - | -- The type of the Allocator given as a template parameter. - | -
- pointer - | -- Container - | -- Pointer to T. - | -
- reference - | -- Container - | -- Reference to T - | -
- const_reference - | -- Container - | -- Const reference to T - | -
- size_type - | -- Container - | -- An unsigned integral type. - | -
- difference_type - | -- Container - | -- A signed integral type. - | -
- iterator - | -- Container - | -- Iterator used to iterate through a dense_hash_set. - | -
- const_iterator - | -- Container - | -- Const iterator used to iterate through a dense_hash_set. - (iterator and const_iterator are the same type.) - | -
- local_iterator - | -- Unordered Associative Container (tr1) - | -- Iterator used to iterate through a subset of - dense_hash_set. - | -
- const_local_iterator - | -- Unordered Associative Container (tr1) - | -- Const iterator used to iterate through a subset of - dense_hash_set. - | -
- iterator begin() const - | -- Container - | -- Returns an iterator pointing to the beginning of the - dense_hash_set. - | -
- iterator end() const - | -- Container - | -- Returns an iterator pointing to the end of the - dense_hash_set. - | -
- local_iterator begin(size_type i) - | -- Unordered Associative Container (tr1) - | -- Returns a local_iterator pointing to the beginning of bucket - i in the dense_hash_set. - | -
- local_iterator end(size_type i) - | -- Unordered Associative Container (tr1) - | -- Returns a local_iterator pointing to the end of bucket - i in the dense_hash_set. For - dense_hash_set, each bucket contains either 0 or 1 item. - | -
- const_local_iterator begin(size_type i) const - | -- Unordered Associative Container (tr1) - | -- Returns a const_local_iterator pointing to the beginning of bucket - i in the dense_hash_set. - | -
- const_local_iterator end(size_type i) const - | -- Unordered Associative Container (tr1) - | -- Returns a const_local_iterator pointing to the end of bucket - i in the dense_hash_set. For - dense_hash_set, each bucket contains either 0 or 1 item. - | -
- size_type size() const - | -- Container - | -- Returns the size of the dense_hash_set. - | -
- size_type max_size() const - | -- Container - | -- Returns the largest possible size of the dense_hash_set. - | -
- bool empty() const - | -- Container - | -- true if the dense_hash_set's size is 0. - | -
- size_type bucket_count() const - | -- Hashed - Associative Container - | -- Returns the number of buckets used by the dense_hash_set. - | -
- size_type max_bucket_count() const - | -- Hashed - Associative Container - | -- Returns the largest possible number of buckets used by the dense_hash_set. - | -
- size_type bucket_size(size_type i) const - | -- Unordered Associative Container (tr1) - | -- Returns the number of elements in bucket i. For - dense_hash_set, this will be either 0 or 1. - | -
- size_type bucket(const key_type& key) const - | -- Unordered Associative Container (tr1) - | -- If the key exists in the map, returns the index of the bucket - containing the given key, otherwise, return the bucket the key - would be inserted into. - This value may be passed to begin(size_type) and - end(size_type). - | -
- float load_factor() const - | -- Unordered Associative Container (tr1) - | -- The number of elements in the dense_hash_set divided by - the number of buckets. - | -
- float max_load_factor() const - | -- Unordered Associative Container (tr1) - | -- The maximum load factor before increasing the number of buckets in - the dense_hash_set. - | -
- void max_load_factor(float new_grow) - | -- Unordered Associative Container (tr1) - | -- Sets the maximum load factor before increasing the number of - buckets in the dense_hash_set. - | -
- float min_load_factor() const - | -- dense_hash_set - | -- The minimum load factor before decreasing the number of buckets in - the dense_hash_set. - | -
- void min_load_factor(float new_grow) - | -- dense_hash_set - | -- Sets the minimum load factor before decreasing the number of - buckets in the dense_hash_set. - | -
- void set_resizing_parameters(float shrink, float grow) - | -- dense_hash_set - | -- DEPRECATED. See below. - | -
- void resize(size_type n) - | -- Hashed - Associative Container - | -- Increases the bucket count to hold at least n items. - [2] [3] - | -
- void rehash(size_type n) - | -- Unordered Associative Container (tr1) - | -- Increases the bucket count to hold at least n items. - This is identical to resize. - [2] [3] - | -
- hasher hash_funct() const - | -- Hashed - Associative Container - | -- Returns the hasher object used by the dense_hash_set. - | -
- hasher hash_function() const - | -- Unordered Associative Container (tr1) - | -- Returns the hasher object used by the dense_hash_set. - This is identical to hash_funct. - | -
- key_equal key_eq() const - | -- Hashed - Associative Container - | -- Returns the key_equal object used by the - dense_hash_set. - | -
- dense_hash_set() - | -- Container - | -- Creates an empty dense_hash_set. - | -
- dense_hash_set(size_type n) - | -- Hashed - Associative Container - | -- Creates an empty dense_hash_set that's optimized for holding - up to n items. - [3] - | -
- dense_hash_set(size_type n, const hasher& h) - | -- Hashed - Associative Container - | -- Creates an empty dense_hash_set that's optimized for up - to n items, using h as the hash function. - | -
- dense_hash_set(size_type n, const hasher& h, const - key_equal& k) - | -- Hashed - Associative Container - | -- Creates an empty dense_hash_set that's optimized for up - to n items, using h as the hash function and - k as the key equal function. - | -
- template <class InputIterator> -dense_hash_set(InputIterator f, InputIterator l)-[2] - |
-- Unique - Hashed Associative Container - | -- Creates a dense_hash_set with a copy of a range. - | -
- template <class InputIterator> -dense_hash_set(InputIterator f, InputIterator l, size_type n)-[2] - |
-- Unique - Hashed Associative Container - | -- Creates a dense_hash_set with a copy of a range that's optimized to - hold up to n items. - | -
- template <class InputIterator> -dense_hash_set(InputIterator f, InputIterator l, size_type n, const -hasher& h)[2] - |
-- Unique - Hashed Associative Container - | -- Creates a dense_hash_set with a copy of a range that's optimized to hold - up to n items, using h as the hash function. - | -
- template <class InputIterator> -dense_hash_set(InputIterator f, InputIterator l, size_type n, const -hasher& h, const key_equal& k)[2] - |
-- Unique - Hashed Associative Container - | -- Creates a dense_hash_set with a copy of a range that's optimized for - holding up to n items, using h as the hash - function and k as the key equal function. - | -
- dense_hash_set(const dense_hash_set&) - | -- Container - | -- The copy constructor. - | -
- dense_hash_set& operator=(const dense_hash_set&) - | -- Container - | -- The assignment operator. - | -
- void swap(dense_hash_set&) - | -- Container - | -- Swaps the contents of two dense_hash_sets. - | -
- pair<iterator, bool> insert(const value_type& x) -- |
-- Unique - Associative Container - | -- Inserts x into the dense_hash_set. - | -
- template <class InputIterator> -void insert(InputIterator f, InputIterator l)[2] - |
-- Unique - Associative Container - | -- Inserts a range into the dense_hash_set. - | -
- void set_empty_key(const key_type& key) [4] - | -- dense_hash_set - | -- See below. - | -
- void set_deleted_key(const key_type& key) [4] - | -- dense_hash_set - | -- See below. - | -
- void clear_deleted_key() [4] - | -- dense_hash_set - | -- See below. - | -
- void erase(iterator pos) - | -- Associative - Container - | -- Erases the element pointed to by pos. - [4] - | -
- size_type erase(const key_type& k) - | -- Associative - Container - | -- Erases the element whose key is k. - [4] - | -
- void erase(iterator first, iterator last) - | -- Associative - Container - | -- Erases all elements in a range. - [4] - | -
- void clear() - | -- Associative - Container - | -- Erases all of the elements. - | -
- void clear_no_resize() - | -- dense_hash_set - | -- See below. - | -
- iterator find(const key_type& k) const - | -- Associative - Container - | -- Finds an element whose key is k. - | -
- size_type count(const key_type& k) const - | -- Unique - Associative Container - | -- Counts the number of elements whose key is k. - | -
- pair<iterator, iterator> equal_range(const -key_type& k) const- |
-- Associative - Container - | -- Finds a range containing all elements whose key is k. - | -
- bool write_metadata(FILE *fp) - | -- dense_hash_set - | -- See below. - | -
- bool read_metadata(FILE *fp) - | -- dense_hash_set - | -- See below. - | -
- bool write_nopointer_data(FILE *fp) - | -- dense_hash_set - | -- See below. - | -
- bool read_nopointer_data(FILE *fp) - | -- dense_hash_set - | -- See below. - | -
- bool operator==(const dense_hash_set&, const dense_hash_set&) -- |
-- Hashed - Associative Container - | -- Tests two dense_hash_sets for equality. This is a global function, not a - member function. - | -
Member | Description |
---|---|
- void set_empty_key(const key_type& key) - | -- Sets the distinguished "empty" key to key. This must be - called immediately after construction, before any other - dense_hash_set operation is called. [4] - | -
- void set_deleted_key(const key_type& key) - | -- Sets the distinguished "deleted" key to key. This must be - called before any calls to erase(). [4] - | -
- void clear_deleted_key() - | -- Clears the distinguished "deleted" key. After this is called, - calls to erase() are not valid on this object. - [4] - | -
- void clear_no_resize() - | -- Clears the hashtable like clear() does, but does not - recover the memory used for hashtable buckets. (The memory - used by the items in the hashtable is still recovered.) - This can save time for applications that want to reuse a - dense_hash_set many times, each time with a similar number - of objects. - | -
- void set_resizing_parameters(float shrink, float grow) - | -- This function is DEPRECATED. It is equivalent to calling - min_load_factor(shrink); max_load_factor(grow). - | -
- bool write_metadata(FILE *fp) - | -- Write hashtable metadata to fp. See below. - | -
- bool read_metadata(FILE *fp) - | -- Read hashtable metadata from fp. See below. - | -
- bool write_nopointer_data(FILE *fp) - | -- Write hashtable contents to fp. This is valid only if the - hashtable key and value are "plain" data. See below. - | -
- bool read_nopointer_data(FILE *fp) - | -- Read hashtable contents from fp. This is valid only if the - hashtable key and value are "plain" data. See below. - | -
[1] - -This member function relies on member template functions, which -may not be supported by all compilers. If your compiler supports -member templates, you can call this function with any type of input -iterator. If your compiler does not yet support member templates, -though, then the arguments must either be of type const -value_type* or of type dense_hash_set::const_iterator.
- -[2] - -In order to preserve iterators, erasing hashtable elements does not -cause a hashtable to resize. This means that after a string of -erase() calls, the hashtable will use more space than is -required. At a cost of invalidating all current iterators, you can -call resize() to manually compact the hashtable. The -hashtable promotes too-small resize() arguments to the -smallest legal value, so to compact a hashtable, it's sufficient to -call resize(0). - -
[3] - -Unlike some other hashtable implementations, the optional n in -the calls to the constructor, resize, and rehash -indicates not the desired number of buckets that -should be allocated, but instead the expected number of items to be -inserted. The class then sizes the hash-set appropriately for the -number of items specified. It's not an error to actually insert more -or fewer items into the hashtable, but the implementation is most -efficient -- does the fewest hashtable resizes -- if the number of -inserted items is n or slightly less.
- -[4] - -dense_hash_set requires you call -set_empty_key() immediately after constructing the hash-set, -and before calling any other dense_hash_set method. (This is -the largest difference between the dense_hash_set API and -other hash-set APIs. See implementation.html -for why this is necessary.) -The argument to set_empty_key() should be a key-value that -is never used for legitimate hash-set entries. If you have no such -key value, you will be unable to use dense_hash_set. It is -an error to call insert() with an item whose key is the -"empty key."
- -dense_hash_set also requires you call -set_deleted_key() before calling erase(). -The argument to set_deleted_key() should be a key-value that -is never used for legitimate hash-set entries. It must be different -from the key-value used for set_empty_key(). It is an error to call -erase() without first calling set_deleted_key(), and -it is also an error to call insert() with an item whose key -is the "deleted key." - -There is no need to call set_deleted_key if you do not -wish to call erase() on the hash-set.
- -It is acceptable to change the deleted-key at any time by calling -set_deleted_key() with a new argument. You can also call -clear_deleted_key(), at which point all keys become valid for -insertion but no hashtable entries can be deleted until -set_deleted_key() is called again.
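The role of the empty and deleted keys can be illustrated with a toy table. The ToySet class below is a self-contained sketch for illustration only, not the real dense_hash_set: it uses linear rather than quadratic probing and never resizes. It shows why erase() must leave a "deleted" tombstone rather than simply re-marking the slot as empty.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy linear-probing set showing the role of the "empty" and "deleted"
// sentinel keys: empty_ marks never-used slots, deleted_ marks tombstones
// left behind by erase() so that later probe chains stay intact.
// Precondition (as with dense_hash_set): neither sentinel is ever inserted.
class ToySet {
 public:
  ToySet(int empty_key, int deleted_key, std::size_t buckets = 16)
      : empty_(empty_key), deleted_(deleted_key), table_(buckets, empty_key) {}

  bool insert(int k) {
    if (contains(k)) return false;               // already present
    std::size_t i = slot(k);
    while (table_[i] != empty_ && table_[i] != deleted_)
      i = (i + 1) % table_.size();               // probe to first reusable slot
    table_[i] = k;                               // empty AND deleted slots are reusable
    return true;
  }

  bool contains(int k) const {
    std::size_t i = slot(k);
    while (table_[i] != empty_) {                // tombstones do NOT end a probe chain
      if (table_[i] == k) return true;
      i = (i + 1) % table_.size();
    }
    return false;
  }

  bool erase(int k) {
    std::size_t i = slot(k);
    while (table_[i] != empty_) {
      if (table_[i] == k) { table_[i] = deleted_; return true; }
      i = (i + 1) % table_.size();
    }
    return false;
  }

 private:
  std::size_t slot(int k) const {
    return static_cast<std::size_t>(k) % table_.size();
  }
  int empty_, deleted_;
  std::vector<int> table_;
};
```

With 16 buckets, keys 4 and 20 both hash to slot 4, so 20 lands in slot 5. Erasing 4 leaves a tombstone in slot 4; because the tombstone does not terminate probe chains, 20 remains findable. Marking slot 4 "empty" instead would make lookups of 20 fail, which is exactly the failure the deleted-key mechanism prevents.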
- - -
- IMPORTANT IMPLEMENTATION NOTE: In the current version of -this code, the input/output routines for dense_hash_set have -not yet been implemented. This section explains the API, but -note that all calls to these routines will fail (return -false). It is a TODO to remedy this situation. - |
It is possible to save and restore dense_hash_set objects -to disk. Storage takes place in two steps. The first writes the -hashtable metadata. The second writes the actual data.
- -To write a hashtable to disk, first call write_metadata() -on an open file pointer. This saves the hashtable information in a -byte-order-independent format.
- -After the metadata has been written to disk, you must write the -actual data stored in the hash-set to disk. If both the key and data -are "simple" enough, you can do this by calling -write_nopointer_data(). "Simple" data is data that can be -safely copied to disk via fwrite(). Native C data types fall -into this category, as do structs of native C data types. Pointers -and STL objects do not.
- -Note that write_nopointer_data() does not do any endian -conversion. Thus, it is only appropriate when you intend to read the -data on the same endian architecture as you write the data.
- -If you cannot use write_nopointer_data() for any reason, -you can write the data yourself by iterating over the -dense_hash_set with a const_iterator and writing -the key and data in any manner you wish.
- -To read the hashtable information from disk, first you must create -a dense_hash_set object. Then open a file pointer to point -to the saved hashtable, and call read_metadata(). If you -saved the data via write_nopointer_data(), you can follow the -read_metadata() call with a call to -read_nopointer_data(). This is all that is needed.
- -If you saved the data through a custom write routine, you must call -a custom read routine to read in the data. To do this, iterate over -the dense_hash_set with an iterator; this is valid -because the metadata has already been set up. For each -item, you can read the value from disk and set it -appropriately. You will need to do a const_cast on the -iterator, since *it is always const. The -code might look like this:
-- for (dense_hash_set<int*>::iterator it = ht.begin();
-      it != ht.end(); ++it) {
-   const_cast<int*&>(*it) = new int;   // cast to a reference so the slot is assignable
-   fread(const_cast<int*>(*it), sizeof(int), 1, fp);
- } -- -
Here's another example, where the item stored in the hash-set is -a C++ object with a non-trivial constructor. In this case, you must -use "placement new" to construct the object at the correct memory -location.
-- for (dense_hash_set<ComplicatedClass>::iterator it = ht.begin();
-      it != ht.end(); ++it) {
-   int ctor_arg;  // ComplicatedClass takes an int as its constructor arg
-   fread(&ctor_arg, sizeof(int), 1, fp);
-   new (const_cast<ComplicatedClass*>(&(*it))) ComplicatedClass(ctor_arg);
- } -- - -
erase() is guaranteed not to invalidate any iterators -- -except for any iterators pointing to the item being erased, of course. -insert() invalidates all iterators, as does -resize().
- -This is implemented by making erase() not resize the -hashtable. If you desire maximum space efficiency, you can call -resize(0) after a string of erase() calls, to force -the hashtable to resize to the smallest possible size.
- -In addition to invalidating iterators, insert() -and resize() invalidate all pointers into the hashtable. If -you want to store a pointer to an object held in a dense_hash_set, -either do so after finishing hashtable inserts, or store the object on -the heap and a pointer to it in the dense_hash_set.
- - - -The following are SGI STL, and some Google STL, concepts and -classes related to dense_hash_set.
- -hash_set, -Associative Container, -Hashed Associative Container, -Simple Associative Container, -Unique Hashed Associative Container, -set, -map -multiset, -multimap, -hash_map, -hash_multiset, -hash_multimap, -sparse_hash_set, -sparse_hash_map, -dense_hash_map - - - diff --git a/src/sparsehash-1.6/doc/designstyle.css b/src/sparsehash-1.6/doc/designstyle.css deleted file mode 100644 index f5d1ec2..0000000 --- a/src/sparsehash-1.6/doc/designstyle.css +++ /dev/null @@ -1,115 +0,0 @@ -body { - background-color: #ffffff; - color: black; - margin-right: 1in; - margin-left: 1in; -} - - -h1, h2, h3, h4, h5, h6 { - color: #3366ff; - font-family: sans-serif; -} -@media print { - /* Darker version for printing */ - h1, h2, h3, h4, h5, h6 { - color: #000080; - font-family: helvetica, sans-serif; - } -} - -h1 { - text-align: center; - font-size: 18pt; -} -h2 { - margin-left: -0.5in; -} -h3 { - margin-left: -0.25in; -} -h4 { - margin-left: -0.125in; -} -hr { - margin-left: -1in; -} - -/* Definition lists: definition term bold */ -dt { - font-weight: bold; -} - -address { - text-align: right; -} -/* Use the tag for bits of code and for variables and objects. */
-code,pre,samp,var {
- color: #006000;
-}
-/* Use the tag for file and directory paths and names. */
-file {
- color: #905050;
- font-family: monospace;
-}
-/* Use the tag for stuff the user should type. */
-kbd {
- color: #600000;
-}
-div.note p {
- float: right;
- width: 3in;
- margin-right: 0%;
- padding: 1px;
- border: 2px solid #6060a0;
- background-color: #fffff0;
-}
-
-UL.nobullets {
- list-style-type: none;
- list-style-image: none;
- margin-left: -1em;
-}
-
-/*
-body:after {
- content: "Google Confidential";
-}
-*/
-
-/* pretty printing styles. See prettify.js */
-.str { color: #080; }
-.kwd { color: #008; }
-.com { color: #800; }
-.typ { color: #606; }
-.lit { color: #066; }
-.pun { color: #660; }
-.pln { color: #000; }
-.tag { color: #008; }
-.atn { color: #606; }
-.atv { color: #080; }
-pre.prettyprint { padding: 2px; border: 1px solid #888; }
-
-.embsrc { background: #eee; }
-
-@media print {
- .str { color: #060; }
- .kwd { color: #006; font-weight: bold; }
- .com { color: #600; font-style: italic; }
- .typ { color: #404; font-weight: bold; }
- .lit { color: #044; }
- .pun { color: #440; }
- .pln { color: #000; }
- .tag { color: #006; font-weight: bold; }
- .atn { color: #404; }
- .atv { color: #060; }
-}
-
-/* Table Column Headers */
-.hdr {
- color: #006;
- font-weight: bold;
- background-color: #dddddd; }
-.hdr2 {
- color: #006;
- background-color: #eeeeee; }
\ No newline at end of file
diff --git a/src/sparsehash-1.6/doc/implementation.html b/src/sparsehash-1.6/doc/implementation.html
deleted file mode 100644
index 2050d54..0000000
--- a/src/sparsehash-1.6/doc/implementation.html
+++ /dev/null
@@ -1,371 +0,0 @@
-
-
-
-Implementation notes: sparse_hash, dense_hash, sparsetable
-
-
-
-
-Implementation of sparse_hash_map, dense_hash_map, and
-sparsetable
-
-This document contains a few notes on how the data structures in this
-package are implemented. This discussion refers at several points to
-the classic text in this area: Knuth, The Art of Computer
-Programming, Vol 3, Hashing.
-
-
-
-sparsetable
-
-For specificity, consider the declaration
-
-
- sparsetable<Foo> t(100); // a sparse array with 100 elements
-
-
-A sparsetable is a random access container that implements a sparse array,
-that is, an array that uses very little memory to store unassigned
-indices (in this case, between 1-2 bits per unassigned index). For
-instance, if you allocate an array of size 5 and assign a[2] = [big
-struct], then a[2] will take up a lot of memory but a[0], a[1], a[3],
-and a[4] will not. Array elements that have a value are called
-"assigned". Array elements that have no value yet, or have had their
-value cleared using erase() or clear(), are called "unassigned".
-For assigned elements, lookups return the assigned value; for
-unassigned elements, they return the default value, which for t is
-Foo().
-
-sparsetable is implemented as an array of "groups". Each group is
-responsible for M array indices. The first group knows about
-t[0]..t[M-1], the second about t[M]..t[2M-1], and so forth. (M is 48
-by default.) At construct time, t creates an array of (99/M + 1)
-groups. From this point on, all operations -- insert, delete, lookup
--- are passed to the appropriate group. In particular, any operation
-on t[i] is actually performed on (t.group[i / M])[i % M].
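The group-dispatch arithmetic above can be checked directly. This is an illustrative sketch with the default M = 48; the names locate and num_groups are not from the real sparsetable code:

```cpp
#include <cassert>

const int M = 48;  // array indices handled per group (the default)

struct GroupPos { int group; int offset; };

// t[i] lives at (t.group[i / M])[i % M]
GroupPos locate(int i) { return GroupPos{i / M, i % M}; }

// A sparsetable of n elements allocates (n-1)/M + 1 groups;
// for n = 100 this is the "(99/M + 1)" in the text, i.e. 3 groups.
int num_groups(int n) { return (n - 1) / M + 1; }
```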
-
-Each group consists of a vector, which holds assigned values, and a
-bitmap of size M, which indicates which indices are assigned. A
-lookup works as follows: the group is asked to look up index i, where
-i < M. The group looks at bitmap[i]. If it's 0, the lookup fails.
-If it's 1, then the group has to find the appropriate value in the
-vector.
-
-find()
-
-Finding the appropriate vector element is the most expensive part of
-the lookup. The code counts all bitmap entries <= i that are set to
-1. (There's at least 1 of them, since bitmap[i] is 1.) Suppose there
-are 4 such entries. Then the right value to return is the 4th element
-of the vector: vector[3]. This takes time O(M), which is a constant
-since M is a constant.
-
-insert()
-
-Insert starts with a lookup. If the lookup succeeds, the code merely
-replaces vector[3] with the new value. If the lookup fails, then the
-code must insert a new entry into the middle of the vector. Again, to
-insert at position i, the code must count all the bitmap entries <= i
-that are set to 1. This indicates the position to insert into the
-vector. All vector entries above that position must be moved to make
-room for the new entry. This takes time, but still constant time
-since the vector has size at most M.
-
-(Inserts could be made faster by using a list instead of a vector to
-hold group values, but this would use much more memory, since each
-list element requires a full pointer of overhead.)
-
-The only metadata that needs to be updated, after the actual value is
-inserted, is to set bitmap[i] to 1. No other counts must be
-maintained.
-
-delete()
-
-Deletes are similar to inserts. They start with a lookup. If it
-fails, the delete is a noop. Otherwise, the appropriate entry is
-removed from the vector, all the vector elements above it are moved
-down one, and bitmap[i] is set to 0.
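The find/insert/delete steps described above can be sketched on a single group. This is a simplified illustration (int values, a naive bit-count loop; the real implementation counts bits much faster):

```cpp
#include <bitset>
#include <cassert>
#include <vector>

const int M = 48;  // indices covered by one group

// One sparsetable "group": a bitmap of assigned indices plus a packed
// vector holding only the assigned values, in index order.
struct Group {
  std::bitset<M> assigned;
  std::vector<int> values;

  // Number of assigned indices below i: the packed position for index i.
  int rank(int i) const {
    int r = 0;
    for (int j = 0; j < i; ++j) r += assigned[j] ? 1 : 0;
    return r;
  }

  bool get(int i, int* out) const {
    if (!assigned[i]) return false;             // unassigned: lookup fails
    *out = values[rank(i)];                     // the rank-th packed value
    return true;
  }

  void set(int i, int v) {
    int r = rank(i);
    if (assigned[i]) { values[r] = v; return; } // replace in place
    values.insert(values.begin() + r, v);       // shift tail up one slot
    assigned[i] = true;
  }

  void erase(int i) {
    if (!assigned[i]) return;                   // noop on unassigned
    values.erase(values.begin() + rank(i));     // shift tail down one slot
    assigned[i] = false;
  }
};
```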
-
-iterators
-
-Sparsetable iterators pose a special burden. They must iterate over
-unassigned array values, but the act of iterating should not cause an
-assignment to happen -- otherwise, iterating over a sparsetable would
-cause it to take up much more room. For const iterators, the matter
-is simple: the iterator is merely programmed to return the default
-value -- Foo() -- when dereferenced while pointing to an unassigned
-entry.
-
-For non-const iterators, such simple techniques fail. Instead,
-dereferencing a sparsetable_iterator returns an opaque object that
-acts like a Foo in almost all situations, but isn't actually a Foo.
-(It does this by defining operator=(), operator value_type(), and,
-most sneakily, operator&().) This works in almost all cases. If it
-doesn't, an explicit cast to value_type will solve the problem:
-
-
- printf("%d", static_cast<Foo>(*t.find(0)));
-
-
-To avoid such problems, consider using get() and set() instead of an
-iterator:
-
-
- for (int i = 0; i < t.size(); ++i)
- if (t.get(i) == ...) t.set(i, ...);
-
-
-Sparsetable also has a special class of iterator, besides normal and
-const: nonempty_iterator. This only iterates over array values that
-are assigned. This is particularly fast given the sparsetable
-implementation, since it can ignore the bitmaps entirely and just
-iterate over the various group vectors.
-
-Resource use
-
-The space overhead for a sparsetable of size N is N + 48N/M bits.
-For the default value of M, this is exactly 2 bits per array entry.
-(That's for 32-bit pointers; for machines with 64-bit pointers, it's N
-+ 80N/M bits, or 2.67 bits per entry.)
-A larger M would use less overhead -- approaching 1 bit per array
-entry -- but take longer for inserts, deletes, and lookups. A smaller
-M would use more overhead but make operations somewhat faster.
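The per-entry overhead arithmetic (N + 48N/M bits, or N + 80N/M with 64-bit pointers) can be checked numerically. The helper below is illustrative only; the 48 and 80 are the per-group bookkeeping bits implied by the text:

```cpp
#include <cassert>

// Overhead per array entry: 1 bitmap bit plus the group's bookkeeping
// bits amortized over the M entries it covers.
double bits_per_entry(double group_header_bits, double M) {
  return 1.0 + group_header_bits / M;
}
```

With M = 48, the 32-bit case gives exactly 2 bits per entry and the 64-bit case about 2.67.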
-
-You can also look at some specific performance numbers.
-
-
-
-sparse_hash_set
-
-For specificity, consider the declaration
-
-
- sparse_hash_set<Foo> t;
-
-
-sparse_hash_set is a hashtable. For more information on hashtables,
-see Knuth. Hashtables are basically arrays with complicated logic on
-top of them. sparse_hash_set uses a sparsetable to implement the
-underlying array.
-
-In particular, sparse_hash_set stores its data in a sparsetable using
-quadratic internal probing (see Knuth). Many hashtable
-implementations use external probing, so each table element is
-actually a pointer chain, holding many hashtable values.
-sparse_hash_set, on the other hand, always stores at most one value in
-each table location. If the hashtable wants to store a second value
-at a given table location, it can't; it's forced to look somewhere
-else.
-
-insert()
-
-As a specific example, suppose t is a new sparse_hash_set. It then
-holds a sparsetable of size 32. The code for t.insert(foo) works as
-follows:
-
-
-1) Call hash<Foo>(foo) to convert foo into an integer i. (hash<Foo> is
- the default hash function; you can specify a different one in the
- template arguments.)
-
-
-2a) Look at t.sparsetable[i % 32]. If it's unassigned, assign it to
- foo. foo is now in the hashtable.
-
-
-2b) If t.sparsetable[i % 32] is assigned, and its value is foo, then
- do nothing: foo was already in t and the insert is a noop.
-
-
-2c) If t.sparsetable[i % 32] is assigned, but to a value other than
- foo, look at t.sparsetable[(i+1) % 32]. If that also fails, try
- t.sparsetable[(i+3) % 32], then t.sparsetable[(i+6) % 32]. In
- general, keep trying the next triangular number.
-
-
-3) If the table is now "too full" -- say, 25 of the 32 table entries
- are now assigned -- grow the table by creating a new sparsetable
- that's twice as big, and rehashing every single element from the
- old table into the new one. This keeps the table from ever filling
- up.
-
-
-4) If the table is now "too empty" -- say, only 3 of the 32 table
- entries are now assigned -- shrink the table by creating a new
- sparsetable that's half as big, and rehashing every element as in
- the growing case. This keeps the table overhead proportional to
- the number of elements in the table.
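The grow/shrink decisions in steps 3 and 4 amount to load-factor checks after each operation. A minimal sketch follows; the 0.75 and 0.1 thresholds and the minimum size guard are illustrative, not the library's actual defaults:

```cpp
#include <cassert>
#include <cstddef>

// Decide the next table size, doubling when occupancy exceeds max_load
// ("too full") and halving when it drops below min_load ("too empty").
std::size_t next_size(std::size_t items, std::size_t buckets,
                      double max_load = 0.75, double min_load = 0.1) {
  double load = static_cast<double>(items) / static_cast<double>(buckets);
  if (load > max_load) return buckets * 2;                 // grow and rehash
  if (load < min_load && buckets > 4) return buckets / 2;  // shrink and rehash
  return buckets;                                          // size unchanged
}
```

With 25 of 32 entries assigned (load 0.78) the table doubles; with 3 of 32 (load 0.09) it halves, matching the examples in the text.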
-
-
-Instead of using triangular numbers as offsets, one could just use
-regular integers: try i, then i+1, then i+2, then i+3. This has bad
-'clumping' behavior, as explored in Knuth. Quadratic probing, using
-the triangular numbers, avoids the clumping while keeping cache
-coherency in the common case. As long as the table size is a power of
-2, the quadratic-probing method described above will explore every
-table element if necessary, to find a good place to insert.
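The triangular-number probe sequence (i, i+1, i+3, i+6, ...) can be generated incrementally, since each probe adds one more than the previous increment. The sketch below (illustrative names) collects the buckets visited and confirms that a power-of-two table size lets the sequence reach every bucket:

```cpp
#include <cassert>
#include <cstddef>
#include <set>

// Visit buckets h, h+1, h+3, h+6, ... (mod table_size): after k probes
// the total offset is the k-th triangular number k*(k+1)/2.
std::set<std::size_t> probed_buckets(std::size_t h, std::size_t table_size) {
  std::set<std::size_t> visited;
  std::size_t pos = h % table_size;
  for (std::size_t step = 1; step <= 2 * table_size; ++step) {
    visited.insert(pos);
    if (visited.size() == table_size) break;  // every bucket reached
    pos = (pos + step) % table_size;          // next triangular offset
  }
  return visited;
}
```

With a non-power-of-two size the sequence can miss buckets entirely (for instance, triangular numbers mod 12 never reach residue 2), which is one more reason to keep the table size a power of two.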
-
-(As a side note, using a table size that's a power of two has several
-advantages, including the speed of calculating (i % table_size). On
-the other hand, power-of-two tables are not very forgiving of a poor
-hash function. Make sure your hash function is a good one! There are
-plenty of dos and don'ts on the web (and in Knuth), for writing hash
-functions.)
-
-The "too full" value, also called the "maximum occupancy", determines
-a time-space tradeoff: in general, the higher it is, the less space is
-wasted but the more probes must be performed for each insert.
-sparse_hash_set uses a high maximum occupancy, since space is more
-important than speed for this data structure.
-
-The "too empty" value is not necessary for performance but helps with
-space use. It's rare for hashtable implementations to check this
-value at insert() time -- after all, how will inserting cause a
-hashtable to get too small? However, the sparse_hash_set
-implementation never resizes on erase(); it's nice to have an erase()
-that does not invalidate iterators. Thus, the first insert() after a
-long string of erase()s could well trigger a hashtable shrink.
-
-find()
-
-find() works similarly to insert. The only difference is in step
-(2a): if the value is unassigned, then the lookup fails immediately.
-
-delete()
-
-delete() is tricky in an internal-probing scheme. The obvious
-implementation of just "unassigning" the relevant table entry doesn't
-work. Consider the following scenario:
-
-
- t.insert(foo1); // foo1 hashes to 4, is put in table[4]
- t.insert(foo2); // foo2 hashes to 4, is put in table[5]
- t.erase(foo1); // table[4] is now 'unassigned'
- t.lookup(foo2); // fails since table[hash(foo2)] is unassigned
-
-
-To avoid these failure situations, delete(foo1) is actually
-implemented by replacing foo1 by a special 'delete' value in the
-hashtable. This 'delete' value causes the table entry to be
-considered unassigned for the purposes of insertion -- if foo3 hashes
-to 4 as well, it can go into table[4] no problem -- but assigned for
-the purposes of lookup.
-
-What is this special 'delete' value? The delete value has to be an
-element of type Foo, since the table can't hold anything else. It
-obviously must be an element the client would never want to insert on
-its own, or else the code couldn't distinguish deleted entries from
-'real' entries with the same value. There's no way to determine a
-good value automatically. The client has to specify it explicitly.
-This is what the set_deleted_key() method does.
-
-Note that set_deleted_key() is only necessary if the client actually
-wants to call t.erase(). For insert-only hash-sets, set_deleted_key()
-is unnecessary.
-
-When copying the hashtable, either to grow it or shrink it, the
-special 'delete' values are not copied into the new table. The
-copy-time rehash makes them unnecessary.
-
-Resource use
-
-The data is stored in a sparsetable, so space use is the same as
-for sparsetable. However, by default the sparse_hash_set
-implementation tries to keep about half the table buckets empty, to
-keep lookup-chains short. Since sparsetable has about 2 bits
-overhead per bucket (or 2.67 bits on 64-bit systems), sparse_hash_map
-has about 4-5 bits overhead per hashtable item.
-
-Time use is also determined in large part by the sparsetable
-implementation. However, there is also an extra probing cost in
-hashtables, which depends in large part on the "too full" value. It
-should be rare to need more than 4-5 probes per lookup, and usually
-significantly less will suffice.
-
-A note on growing and shrinking the hashtable: all hashtable
-implementations use the most memory when growing a hashtable, since
-they must have room for both the old table and the new table at the
-same time. sparse_hash_set is careful to delete entries from the old
-hashtable as soon as they're copied into the new one, to minimize this
-space overhead. (It does this efficiently by using its knowledge of
-the sparsetable class and copying one sparsetable group at a time.)
-
-You can also look at some specific performance numbers.
-
-
-
-sparse_hash_map
-
-sparse_hash_map is implemented identically to sparse_hash_set. The
-only difference is instead of storing just Foo in each table entry,
-the data structure stores pair<Foo, Value>.
-
-
-
-dense_hash_set
-
-The hashtable aspects of dense_hash_set are identical to
-sparse_hash_set: it uses quadratic internal probing, and resizes
-hashtables in exactly the same way. The difference is in the
-underlying array: instead of using a sparsetable, dense_hash_set uses
-a C array. This means much more space is used, especially if Foo is
-big. However, it makes all operations faster, since sparsetable has
-memory management overhead that C arrays do not.
-
-The use of C arrays instead of sparsetables points to one immediate
-complication dense_hash_set has that sparse_hash_set does not: the
-need to distinguish assigned from unassigned entries. In a
-sparsetable, this is accomplished by a bitmap. dense_hash_set, on the
-other hand, uses a dedicated value to specify unassigned entries.
-Thus, dense_hash_set has two special values: one to indicate deleted
-table entries, and one to indicate unassigned table entries. At
-construct time, all table entries are initialized to 'unassigned'.
-
-dense_hash_set provides the method set_empty_key() to indicate the
-value that should be used for unassigned entries. Like
-set_deleted_key(), set_empty_key() requires a value that will not be
-used by the client for any legitimate purpose. Unlike
-set_deleted_key(), set_empty_key() is always required, no matter what
-hashtable operations the client wishes to perform.
-
-Resource use
-
-This implementation is fast because even though dense_hash_set may not
-be space efficient, most lookups are localized: a single lookup may
-need to access table[i], and maybe table[i+1] and table[i+3], but
-nothing other than that. For all but the biggest data structures,
-these will frequently be in a single cache line.
-
-This implementation takes, for every unused bucket, space as big as
-the key-type. Usually between half and two-thirds of the buckets are
-empty.
-
-The doubling method used by dense_hash_set tends to work poorly
-with most memory allocators. This is because memory allocators tend
-to have memory 'buckets' which are a power of two. Since each
-doubling of a dense_hash_set doubles the memory use, a single
-hashtable doubling will require a new memory 'bucket' from the memory
-allocator, leaving the old bucket stranded as fragmented memory.
-Hence, it's not recommended this data structure be used with many
-inserts in memory-constrained situations.
-
-You can also look at some specific performance numbers.
-
-
-
-dense_hash_map
-
-dense_hash_map is identical to dense_hash_set except for what values
-are stored in each table entry.
-
-
-
-Craig Silverstein
-Thu Jan 6 20:15:42 PST 2005
-
-
-
-
diff --git a/src/sparsehash-1.6/doc/index.html b/src/sparsehash-1.6/doc/index.html
deleted file mode 100644
index 68a5865..0000000
--- a/src/sparsehash-1.6/doc/index.html
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
-
- Google Sparsehash Package
-
-
-
-
-
-
-
-
- Google Sparsehash Package
-
-
-The Google sparsehash package consists of two hashtable
-implementations: sparse, which is designed to be very space
-efficient, and dense, which is designed to be very time
-efficient. For each one, the package provides both a hash-map and a
-hash-set, to mirror the classes in the common STL implementation.
-
-Documentation on how to use these classes:
-
-
-In addition to the hash-map (and hash-set) classes, there's also a
-lower-level class that implements a "sparse" array. This class can be
-useful in its own right; consider using it when you'd normally use a
-sparse_hash_map, but your keys are all small-ish
-integers.
-
- - sparsetable
-
-
-There is also a doc explaining the implementation details of these
-classes, for those who are curious. And finally, you can see some
-performance comparisons, both between
-the various classes here, but also between these implementations and
-other standard hashtable implementations.
-
-
-
-Craig Silverstein
-Last modified: Thu Jan 25 17:58:02 PST 2007
-
-
-
-
diff --git a/src/sparsehash-1.6/doc/performance.html b/src/sparsehash-1.6/doc/performance.html
deleted file mode 100644
index 40c1406..0000000
--- a/src/sparsehash-1.6/doc/performance.html
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
-
-Performance notes: sparse_hash, dense_hash, sparsetable
-
-
-
-
-Performance Numbers
-
-Here are some performance numbers from an example desktop machine,
-taken from a version of time_hash_map that was instrumented to also
-report memory allocation information (this modification is not
-included by default because it required a big hack to do, including
-modifying the STL code to not try to do its own freelist management).
-
-Note there are lots of caveats on these numbers: they may differ from
-machine to machine and compiler to compiler, and they only test a very
-particular usage pattern that may not match how you use hashtables --
-for instance, they test hashtables with very small keys. However,
-they're still useful for a baseline comparison of the various
-hashtable implementations.
-
-These figures are from a 2.80GHz Pentium 4 with 2G of memory. The
-'standard' hash_map and map implementations are the SGI STL code
-included with gcc2. Compiled with gcc2.95.3 -g
--O2
-
-
-======
-Average over 10000000 iterations
-Wed Dec 8 14:56:38 PST 2004
-
-SPARSE_HASH_MAP:
-map_grow 665 ns
-map_predict/grow 303 ns
-map_replace 177 ns
-map_fetch 117 ns
-map_remove 192 ns
-memory used in map_grow 84.3956 Mbytes
-
-DENSE_HASH_MAP:
-map_grow 84 ns
-map_predict/grow 22 ns
-map_replace 18 ns
-map_fetch 13 ns
-map_remove 23 ns
-memory used in map_grow 256.0000 Mbytes
-
-STANDARD HASH_MAP:
-map_grow 162 ns
-map_predict/grow 107 ns
-map_replace 44 ns
-map_fetch 22 ns
-map_remove 124 ns
-memory used in map_grow 204.1643 Mbytes
-
-STANDARD MAP:
-map_grow 297 ns
-map_predict/grow 282 ns
-map_replace 113 ns
-map_fetch 113 ns
-map_remove 238 ns
-memory used in map_grow 236.8081 Mbytes
-
-
-
-A Note on Hash Functions
-
-For good performance, the Google hash routines depend on a good
-hash function: one that distributes data evenly. Many hashtable
-implementations come with sub-optimal hash functions that can degrade
-performance. For instance, the hash function given in Knuth's _Art of
-Computer Programming_, and the default string hash function in SGI's
-STL implementation, both distribute certain data sets unevenly,
-leading to poor performance.
-
-As an example, in one test of the default SGI STL string hash
-function against the Hsieh hash function (see below), for a particular
-set of string keys, the Hsieh function resulted in hashtable lookups
-that were 20 times as fast as the STLPort hash function. The string
-keys were chosen to be "hard" to hash well, so these results may not
-be typical, but they are suggestive.
-
-There has been much research over the years into good hash
-functions. Here are some hash functions of note.
-
-
- - Bob Jenkins: http://burtleburtle.net/bob/hash/
-
- Paul Hsieh: http://www.azillionmonkeys.com/qed/hash.html
-
- Fowler/Noll/Vo (FNV): http://www.isthe.com/chongo/tech/comp/fnv/
-
- MurmurHash: http://murmurhash.googlepages.com/
-
-
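Of the functions listed above, FNV is compact enough to sketch inline. This is a rendering of 64-bit FNV-1a using the published constants; treat it as an illustration of what a "good" hash function looks like rather than a drop-in replacement for a vetted implementation.

```cpp
#include <cstdint>
#include <cstring>

// 64-bit FNV-1a: xor each byte into the state, then multiply by a large
// odd prime. The multiply spreads similar keys across distant buckets,
// which is exactly the property the text above asks of a hash function.
inline uint64_t fnv1a(const char* s) {
  uint64_t h = 14695981039346656037ULL;  // FNV offset basis
  for (; *s; ++s) {
    h ^= static_cast<unsigned char>(*s);
    h *= 1099511628211ULL;               // FNV prime
  }
  return h;
}
```

A hasher struct for a const char* keyed table would simply wrap this in an operator().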
-
-
diff --git a/src/sparsehash-1.6/doc/sparse_hash_map.html b/src/sparsehash-1.6/doc/sparse_hash_map.html
deleted file mode 100644
index 63055c9..0000000
--- a/src/sparsehash-1.6/doc/sparse_hash_map.html
+++ /dev/null
@@ -1,1527 +0,0 @@
-
-
-
-
-
-sparse_hash_map<Key, Data, HashFcn, EqualKey, Alloc>
-
-
-
-
-[Note: this document is formatted similarly to the SGI STL
-implementation documentation pages, and refers to concepts and classes
-defined there. However, neither this document nor the code it
-describes is associated with SGI, nor is it necessary to have SGI's
-STL implementation installed in order to use this class.]
-
-
-sparse_hash_map<Key, Data, HashFcn, EqualKey, Alloc>
-
-sparse_hash_map is a Hashed
-Associative Container that associates objects of type Key
-with objects of type Data. sparse_hash_map is a Pair
-Associative Container, meaning that its value type is pair<const Key, Data>. It is also a
-Unique
-Associative Container, meaning that no two elements have keys that
-compare equal using EqualKey.
-
-Looking up an element in a sparse_hash_map by its key is
-efficient, so sparse_hash_map is useful for "dictionaries"
-where the order of elements is irrelevant. If it is important for the
-elements to be in a particular order, however, then map is more appropriate.
-
-sparse_hash_map is distinguished from other hash-map
-implementations by its stingy use of memory and by the ability to save
-and restore contents to disk. On the other hand, this hash-map
-implementation, while still efficient, is slower than other hash-map
-implementations, and it also has requirements -- for instance, for a
-distinguished "deleted key" -- that may not be easy for all
-applications to satisfy.
-
-This class is appropriate for applications that need to store
-large "dictionaries" in memory, or for applications that need these
-dictionaries to be persistent.
-
-
-Example
-
-(Note: this example uses SGI semantics for hash<>
--- the kind used by gcc and most Unix compiler suites -- and not
-Dinkumware semantics -- the kind used by Microsoft Visual Studio. If
-you are using MSVC, this example will not compile as-is: you'll need
-to change hash to hash_compare, and you
-won't use eqstr at all. See the MSVC documentation for
-hash_map and hash_compare, for more
-details.)
-
-
-#include <iostream>
-#include <google/sparse_hash_map>
-
-using google::sparse_hash_map; // namespace where class lives by default
-using std::cout;
-using std::endl;
-using ext::hash; // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS
-
-struct eqstr
-{
- bool operator()(const char* s1, const char* s2) const
- {
- return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
- }
-};
-
-int main()
-{
- sparse_hash_map<const char*, int, hash<const char*>, eqstr> months;
-
- months["january"] = 31;
- months["february"] = 28;
- months["march"] = 31;
- months["april"] = 30;
- months["may"] = 31;
- months["june"] = 30;
- months["july"] = 31;
- months["august"] = 31;
- months["september"] = 30;
- months["october"] = 31;
- months["november"] = 30;
- months["december"] = 31;
-
- cout << "september -> " << months["september"] << endl;
- cout << "april -> " << months["april"] << endl;
- cout << "june -> " << months["june"] << endl;
- cout << "november -> " << months["november"] << endl;
-}
-
-
-
-Definition
-
-Defined in the header sparse_hash_map.
-This class is not part of the C++ standard, though it is mostly
-compatible with the tr1 class unordered_map.
-
-
-Template parameters
-
-
-Parameter Description Default
-
-
-
- Key
-
-
- The hash_map's key type. This is also defined as
- sparse_hash_map::key_type.
-
-
-
-
-
-
-
-
- Data
-
-
- The hash_map's data type. This is also defined as
- sparse_hash_map::data_type.
-
-
-
-
-
-
-
-
- HashFcn
-
-
- The hash function used by the
- hash_map. This is also defined as sparse_hash_map::hasher.
-
- Note: Hashtable performance depends heavily on the choice of
- hash function. See the performance
- page for more information.
-
-
- hash<Key>
-
-
-
-
-
- EqualKey
-
-
- The hash_map key equality function: a binary predicate that determines
- whether two keys are equal. This is also defined as
- sparse_hash_map::key_equal.
-
-
- equal_to<Key>
-
-
-
-
-
- Alloc
-
-
- Ignored; this is included only for API-compatibility
- with SGI's (and tr1's) STL implementation.
-
-
-
-
-
-
-
-
-Model of
-
-Unique Hashed Associative Container,
-Pair Associative Container
-
-
-Type requirements
-
-
--
-Key is Assignable.
-
-
-EqualKey is a Binary Predicate whose argument type is Key.
-
-
-EqualKey is an equivalence relation.
-
-
-Alloc is an Allocator.
-
-
-
-Public base classes
-
-None.
-
-
-Members
-
-
-Member Where defined Description
-
-
-
- key_type
-
-
- Associative
- Container
-
-
- The sparse_hash_map's key type, Key.
-
-
-
-
-
- data_type
-
-
- Pair
- Associative Container
-
-
- The type of object associated with the keys.
-
-
-
-
-
- value_type
-
-
- Pair
- Associative Container
-
-
- The type of object, pair<const key_type, data_type>,
- stored in the hash_map.
-
-
-
-
-
- hasher
-
-
- Hashed
- Associative Container
-
-
- The sparse_hash_map's hash
- function.
-
-
-
-
-
- key_equal
-
-
- Hashed
- Associative Container
-
-
- Function
- object that compares keys for equality.
-
-
-
-
-
- allocator_type
-
-
- Unordered Associative Container (tr1)
-
-
- The type of the Allocator given as a template parameter.
-
-
-
-
-
- pointer
-
-
- Container
-
-
- Pointer to T.
-
-
-
-
-
- reference
-
-
- Container
-
-
- Reference to T
-
-
-
-
-
- const_reference
-
-
- Container
-
-
- Const reference to T
-
-
-
-
-
- size_type
-
-
- Container
-
-
- An unsigned integral type.
-
-
-
-
-
- difference_type
-
-
- Container
-
-
- A signed integral type.
-
-
-
-
-
- iterator
-
-
- Container
-
-
- Iterator used to iterate through a sparse_hash_map. [1]
-
-
-
-
-
- const_iterator
-
-
- Container
-
-
- Const iterator used to iterate through a sparse_hash_map.
-
-
-
-
-
- local_iterator
-
-
- Unordered Associative Container (tr1)
-
-
- Iterator used to iterate through a subset of
- sparse_hash_map. [1]
-
-
-
-
-
- const_local_iterator
-
-
- Unordered Associative Container (tr1)
-
-
- Const iterator used to iterate through a subset of
- sparse_hash_map.
-
-
-
-
-
- iterator begin()
-
-
- Container
-
-
- Returns an iterator pointing to the beginning of the
- sparse_hash_map.
-
-
-
-
-
- iterator end()
-
-
- Container
-
-
- Returns an iterator pointing to the end of the
- sparse_hash_map.
-
-
-
-
-
- const_iterator begin() const
-
-
- Container
-
-
- Returns a const_iterator pointing to the beginning of the
- sparse_hash_map.
-
-
-
-
-
- const_iterator end() const
-
-
- Container
-
-
- Returns a const_iterator pointing to the end of the
- sparse_hash_map.
-
-
-
-
-
- local_iterator begin(size_type i)
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a local_iterator pointing to the beginning of bucket
- i in the sparse_hash_map.
-
-
-
-
-
- local_iterator end(size_type i)
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a local_iterator pointing to the end of bucket
- i in the sparse_hash_map. For
- sparse_hash_map, each bucket contains either 0 or 1 item.
-
-
-
-
-
- const_local_iterator begin(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a const_local_iterator pointing to the beginning of bucket
- i in the sparse_hash_map.
-
-
-
-
-
- const_local_iterator end(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a const_local_iterator pointing to the end of bucket
- i in the sparse_hash_map. For
- sparse_hash_map, each bucket contains either 0 or 1 item.
-
-
-
-
-
- size_type size() const
-
-
- Container
-
-
- Returns the size of the sparse_hash_map.
-
-
-
-
-
- size_type max_size() const
-
-
- Container
-
-
- Returns the largest possible size of the sparse_hash_map.
-
-
-
-
-
- bool empty() const
-
-
- Container
-
-
- true if the sparse_hash_map's size is 0.
-
-
-
-
-
- size_type bucket_count() const
-
-
- Hashed
- Associative Container
-
-
- Returns the number of buckets used by the sparse_hash_map.
-
-
-
-
-
- size_type max_bucket_count() const
-
-
- Hashed
- Associative Container
-
-
- Returns the largest possible number of buckets used by the sparse_hash_map.
-
-
-
-
-
- size_type bucket_size(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns the number of elements in bucket i. For
- sparse_hash_map, this will be either 0 or 1.
-
-
-
-
-
- size_type bucket(const key_type& key) const
-
-
- Unordered Associative Container (tr1)
-
-
- If the key exists in the map, returns the index of the bucket
- containing the given key; otherwise, returns the bucket the key
- would be inserted into.
- This value may be passed to begin(size_type) and
- end(size_type).
-
-
-
-
-
- float load_factor() const
-
-
- Unordered Associative Container (tr1)
-
-
- The number of elements in the sparse_hash_map divided by
- the number of buckets.
-
-
-
-
-
- float max_load_factor() const
-
-
- Unordered Associative Container (tr1)
-
-
- The maximum load factor before increasing the number of buckets in
- the sparse_hash_map.
-
-
-
-
-
- void max_load_factor(float new_grow)
-
-
- Unordered Associative Container (tr1)
-
-
- Sets the maximum load factor before increasing the number of
- buckets in the sparse_hash_map.
-
-
-
-
-
- float min_load_factor() const
-
-
- sparse_hash_map
-
-
- The minimum load factor before decreasing the number of buckets in
- the sparse_hash_map.
-
-
-
-
-
- void min_load_factor(float new_grow)
-
-
- sparse_hash_map
-
-
- Sets the minimum load factor before decreasing the number of
- buckets in the sparse_hash_map.
-
-
-
-
-
- void set_resizing_parameters(float shrink, float grow)
-
-
- sparse_hash_map
-
-
- DEPRECATED. See below.
-
-
-
-
-
- void resize(size_type n)
-
-
- Hashed
- Associative Container
-
-
- Increases the bucket count to hold at least n items.
- [4] [5]
-
-
-
-
-
- void rehash(size_type n)
-
-
- Unordered Associative Container (tr1)
-
-
- Increases the bucket count to hold at least n items.
- This is identical to resize.
- [4] [5]
-
-
-
-
-
- hasher hash_funct() const
-
-
- Hashed
- Associative Container
-
-
- Returns the hasher object used by the sparse_hash_map.
-
-
-
-
-
- hasher hash_function() const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns the hasher object used by the sparse_hash_map.
- This is identical to hash_funct.
-
-
-
-
-
- key_equal key_eq() const
-
-
- Hashed
- Associative Container
-
-
- Returns the key_equal object used by the
- sparse_hash_map.
-
-
-
-
-
- sparse_hash_map()
-
-
- Container
-
-
- Creates an empty sparse_hash_map.
-
-
-
-
-
- sparse_hash_map(size_type n)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty sparse_hash_map that's optimized for holding
- up to n items.
- [5]
-
-
-
-
-
- sparse_hash_map(size_type n, const hasher& h)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty sparse_hash_map that's optimized for up
- to n items, using h as the hash function.
-
-
-
-
-
- sparse_hash_map(size_type n, const hasher& h, const
- key_equal& k)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty sparse_hash_map that's optimized for up
- to n items, using h as the hash function and
- k as the key equal function.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_map(InputIterator f, InputIterator l)
-[2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a sparse_hash_map with a copy of a range.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_map(InputIterator f, InputIterator l, size_type n)
-[2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a hash_map with a copy of a range that's optimized to
- hold up to n items.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_map(InputIterator f, InputIterator l, size_type n, const
-hasher& h) [2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a hash_map with a copy of a range that's optimized to hold
- up to n items, using h as the hash function.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_map(InputIterator f, InputIterator l, size_type n, const
-hasher& h, const key_equal& k) [2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a hash_map with a copy of a range that's optimized for
- holding up to n items, using h as the hash
- function and k as the key equal function.
-
-
-
-
-
- sparse_hash_map(const sparse_hash_map&)
-
-
- Container
-
-
- The copy constructor.
-
-
-
-
-
- sparse_hash_map& operator=(const sparse_hash_map&)
-
-
- Container
-
-
- The assignment operator
-
-
-
-
-
- void swap(sparse_hash_map&)
-
-
- Container
-
-
- Swaps the contents of two hash_maps.
-
-
-
-
-
- pair<iterator, bool> insert(const value_type& x)
-
-
-
- Unique
- Associative Container
-
-
- Inserts x into the sparse_hash_map.
-
-
-
-
-
- template <class InputIterator>
-void insert(InputIterator f, InputIterator l) [2]
-
-
- Unique
- Associative Container
-
-
- Inserts a range into the sparse_hash_map.
-
-
-
-
-
- void set_deleted_key(const key_type& key) [6]
-
-
- sparse_hash_map
-
-
- See below.
-
-
-
-
-
- void clear_deleted_key() [6]
-
-
- sparse_hash_map
-
-
- See below.
-
-
-
-
-
- void erase(iterator pos)
-
-
- Associative
- Container
-
-
- Erases the element pointed to by pos.
- [6]
-
-
-
-
-
- size_type erase(const key_type& k)
-
-
- Associative
- Container
-
-
- Erases the element whose key is k.
- [6]
-
-
-
-
-
- void erase(iterator first, iterator last)
-
-
- Associative
- Container
-
-
- Erases all elements in a range.
- [6]
-
-
-
-
-
- void clear()
-
-
- Associative
- Container
-
-
- Erases all of the elements.
-
-
-
-
-
- const_iterator find(const key_type& k) const
-
-
- Associative
- Container
-
-
- Finds an element whose key is k.
-
-
-
-
-
- iterator find(const key_type& k)
-
-
- Associative
- Container
-
-
- Finds an element whose key is k.
-
-
-
-
-
- size_type count(const key_type& k) const
-
-
- Unique
- Associative Container
-
-
- Counts the number of elements whose key is k.
-
-
-
-
-
- pair<const_iterator, const_iterator> equal_range(const
-key_type& k) const
-
-
- Associative
- Container
-
-
- Finds a range containing all elements whose key is k.
-
-
-
-
-
- pair<iterator, iterator> equal_range(const
-key_type& k)
-
-
- Associative
- Container
-
-
- Finds a range containing all elements whose key is k.
-
-
-
-
-
- data_type& operator[](const key_type& k) [3]
-
-
- sparse_hash_map
-
-
- See below.
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- sparse_hash_map
-
-
- See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- sparse_hash_map
-
-
- See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- sparse_hash_map
-
-
- See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- sparse_hash_map
-
-
- See below.
-
-
-
-
-
- bool operator==(const sparse_hash_map&, const sparse_hash_map&)
-
-
-
- Hashed
- Associative Container
-
-
- Tests two hash_maps for equality. This is a global function, not a
- member function.
-
-
-
-
-
-
-New members
-
-These members are not defined in the Unique
-Hashed Associative Container, Pair
-Associative Container, or tr1's
-Unordered Associative Container requirements,
-but are specific to sparse_hash_map.
-
-
-Member Description
-
-
-
-
- void set_deleted_key(const key_type& key)
-
-
- Sets the distinguished "deleted" key to key. This must be
- called before any calls to erase(). [6]
-
-
-
-
-
- void clear_deleted_key()
-
-
- Clears the distinguished "deleted" key. After this is called,
- calls to erase() are not valid on this object.
- [6]
-
-
-
-
-
-
-data_type&
-operator[](const key_type& k) [3]
-
-
-
- Returns a reference to the object that is associated with
- a particular key. If the sparse_hash_map does not already
- contain such an object, operator[] inserts the default
- object data_type(). [3]
-
-
-
-
- void set_resizing_parameters(float shrink, float grow)
-
-
- This function is DEPRECATED. It is equivalent to calling
- min_load_factor(shrink); max_load_factor(grow).
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- Write hashtable metadata to fp. See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- Read hashtable metadata from fp. See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- Write hashtable contents to fp. This is valid only if the
- hashtable key and value are "plain" data. See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- Read hashtable contents from fp. This is valid only if the
- hashtable key and value are "plain" data. See below.
-
-
-
-
-
-
-Notes
-
-[1]
-
-sparse_hash_map::iterator is not a mutable iterator, because
-sparse_hash_map::value_type is not Assignable.
-That is, if i is of type sparse_hash_map::iterator
-and p is of type sparse_hash_map::value_type, then
-*i = p is not a valid expression. However,
-sparse_hash_map::iterator isn't a constant iterator either,
-because it can be used to modify the object that it points to. Using
-the same notation as above, (*i).second = p is a valid
-expression.
-
-[2]
-
-This member function relies on member template functions, which
-may not be supported by all compilers. If your compiler supports
-member templates, you can call this function with any type of input
-iterator. If your compiler does not yet support member templates,
-though, then the arguments must either be of type const
-value_type* or of type sparse_hash_map::const_iterator.
-
-[3]
-
-Since operator[] might insert a new element into the
-sparse_hash_map, it can't possibly be a const member
-function. Note that the definition of operator[] is
-extremely simple: m[k] is equivalent to
-(*((m.insert(value_type(k, data_type()))).first)).second.
-Strictly speaking, this member function is unnecessary: it exists only
-for convenience.
-
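The equivalence in note [3] can be checked directly. sparse_hash_map is not needed to see it; std::map defines operator[] by the same insert-based expression, so this self-contained sketch uses it for illustration (the function name bracket_is_insert is mine).

```cpp
#include <map>
#include <string>

// Verify that m[k] and (*((m.insert(value_type(k, data_type()))).first)).second
// name the same object: operator[] inserts a default-constructed value the
// first time a key is seen, then returns a reference to it.
inline bool bracket_is_insert() {
  std::map<std::string, int> m;
  int& by_insert =
      (*(m.insert(std::make_pair(std::string("k"), int())).first)).second;
  by_insert = 42;           // write through the insert-based reference
  return m["k"] == 42       // operator[] sees the same element...
      && m.size() == 1;     // ...and did not insert a second one
}
```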
-[4]
-
-In order to preserve iterators, erasing hashtable elements does not
-cause a hashtable to resize. This means that after a string of
-erase() calls, the hashtable will use more space than is
-required. At a cost of invalidating all current iterators, you can
-call resize() to manually compact the hashtable. The
-hashtable promotes too-small resize() arguments to the
-smallest legal value, so to compact a hashtable, it's sufficient to
-call resize(0).
-
-
-[5]
-
-Unlike some other hashtable implementations, the optional n in
-the calls to the constructor, resize, and rehash
-indicates not the desired number of buckets that
-should be allocated, but instead the expected number of items to be
-inserted. The class then sizes the hash-map appropriately for the
-number of items specified. It's not an error to actually insert more
-or fewer items into the hashtable, but the implementation is most
-efficient -- does the fewest hashtable resizes -- if the number of
-inserted items is n or slightly less.
-
-[6]
-
-sparse_hash_map requires you call
-set_deleted_key() before calling erase(). (This is
-the largest difference between the sparse_hash_map API and
-other hash-map APIs. See implementation.html
-for why this is necessary.)
-The argument to set_deleted_key() should be a key-value that
-is never used for legitimate hash-map entries. It is an error to call
-erase() without first calling set_deleted_key(), and
-it is also an error to call insert() with an item whose key
-is the "deleted key."
-
-There is no need to call set_deleted_key if you do not
-wish to call erase() on the hash-map.
-
-It is acceptable to change the deleted-key at any time by calling
-set_deleted_key() with a new argument. You can also call
-clear_deleted_key(), at which point all keys become valid for
-insertion but no hashtable entries can be deleted until
-set_deleted_key() is called again.
-
-Note: If you use set_deleted_key, it is also
-necessary that data_type has a zero-argument default
-constructor. This is because sparse_hash_map uses the
-special value pair(deleted_key, data_type()) to denote
-deleted buckets, and thus needs to be able to create
-data_type using a zero-argument constructor.
-
-If your data_type does not have a zero-argument default
-constructor, there are several workarounds:
-
- - Store a pointer to data_type in the map, instead of
- data_type directly. This may yield faster code as
- well, since hashtable-resizes will just have to move pointers
- around, rather than copying the entire data_type.
-
- Add a zero-argument default constructor to data_type.
-
- Subclass data_type and add a zero-argument default
- constructor to the subclass.
-
-
-If you do not use set_deleted_key, then there is no
-requirement that data_type have a zero-argument default
-constructor.
-
-
-
-Input/Output
-
-It is possible to save and restore sparse_hash_map objects
-to disk. Storage takes place in two steps. The first writes the
-hashtable metadata. The second writes the actual data.
-
-To write a hashtable to disk, first call write_metadata()
-on an open file pointer. This saves the hashtable information in a
-byte-order-independent format.
-
-After the metadata has been written to disk, you must write the
-actual data stored in the hash-map to disk. If both the key and data
-are "simple" enough, you can do this by calling
-write_nopointer_data(). "Simple" data is data that can be
-safely copied to disk via fwrite(). Native C data types fall
-into this category, as do structs of native C data types. Pointers
-and STL objects do not.
-
-Note that write_nopointer_data() does not do any endian
-conversion. Thus, it is only appropriate when you intend to read the
-data on the same endian architecture as you write the data.
-
-If you cannot use write_nopointer_data() for any reason,
-you can write the data yourself by iterating over the
-sparse_hash_map with a const_iterator and writing
-the key and data in any manner you wish.
-
-To read the hashtable information from disk, first you must create
-a sparse_hash_map object. Then open a file pointer to point
-to the saved hashtable, and call read_metadata(). If you
-saved the data via write_nopointer_data(), you can follow the
-read_metadata() call with a call to
-read_nopointer_data(). This is all that is needed.
-
-If you saved the data through a custom write routine, you must call
-a custom read routine to read in the data. To do this, iterate over
-the sparse_hash_map with an iterator; this operation
-is meaningful because the metadata has already been set up. For each
-iterator item, you can read the key and value from disk, and set it
-appropriately. You will need to do a const_cast on the
-iterator, since it->first is always const. You
-will also need to use placement-new if the key or value is a C++
-object. The code might look like this:
-
- for (sparse_hash_map<int*, ComplicatedClass>::iterator it = ht.begin();
- it != ht.end(); ++it) {
- // The key is stored in the sparse_hash_map as a pointer
- const_cast<int*&>(it->first) = new int;
- fread(const_cast<int*>(it->first), sizeof(int), 1, fp);
- // The value is a complicated C++ class that takes an int to construct
- int ctor_arg;
- fread(&ctor_arg, sizeof(int), 1, fp);
- new (&it->second) ComplicatedClass(ctor_arg); // "placement new"
- }
-
-
-
-Validity of Iterators
-
-erase() is guaranteed not to invalidate any iterators --
-except for any iterators pointing to the item being erased, of course.
-insert() invalidates all iterators, as does
-resize().
-
-This is implemented by making erase() not resize the
-hashtable. If you desire maximum space efficiency, you can call
-resize(0) after a string of erase() calls, to force
-the hashtable to resize to the smallest possible size.
-
-In addition to invalidating iterators, insert()
-and resize() invalidate all pointers into the hashtable. If
-you want to store a pointer to an object held in a sparse_hash_map,
-either do so after finishing hashtable inserts, or store the object on
-the heap and a pointer to it in the sparse_hash_map.
-
-
-See also
-
-The following are SGI STL, and some Google STL, concepts and
-classes related to sparse_hash_map.
-
-hash_map,
-Associative Container,
-Hashed Associative Container,
-Pair Associative Container,
-Unique Hashed Associative Container,
-set,
-map,
-multiset,
-multimap,
-hash_set,
-hash_multiset,
-hash_multimap,
-sparsetable,
-sparse_hash_set,
-dense_hash_set,
-dense_hash_map
-
-
-
diff --git a/src/sparsehash-1.6/doc/sparse_hash_set.html b/src/sparsehash-1.6/doc/sparse_hash_set.html
deleted file mode 100644
index 70c7721..0000000
--- a/src/sparsehash-1.6/doc/sparse_hash_set.html
+++ /dev/null
@@ -1,1376 +0,0 @@
-
-
-
-
-
-sparse_hash_set<Key, HashFcn, EqualKey, Alloc>
-
-
-
-
-[Note: this document is formatted similarly to the SGI STL
-implementation documentation pages, and refers to concepts and classes
-defined there. However, neither this document nor the code it
-describes is associated with SGI, nor is it necessary to have SGI's
-STL implementation installed in order to use this class.]
-
-
-sparse_hash_set<Key, HashFcn, EqualKey, Alloc>
-
-sparse_hash_set is a Hashed
-Associative Container that stores objects of type Key.
-sparse_hash_set is a Simple
-Associative Container, meaning that its value type, as well as its
-key type, is Key. It is also a
-Unique
-Associative Container, meaning that no two elements have keys that
-compare equal using EqualKey.
-
-Looking up an element in a sparse_hash_set by its key is
-efficient, so sparse_hash_set is useful for "dictionaries"
-where the order of elements is irrelevant. If it is important for the
-elements to be in a particular order, however, then set is more appropriate.
-
-sparse_hash_set is distinguished from other hash-set
-implementations by its stingy use of memory and by the ability to save
-and restore contents to disk. On the other hand, this hash-set
-implementation, while still efficient, is slower than other hash-set
-implementations, and it also has requirements -- for instance, for a
-distinguished "deleted key" -- that may not be easy for all
-applications to satisfy.
-
-This class is appropriate for applications that need to store
-large "dictionaries" in memory, or for applications that need these
-dictionaries to be persistent.
-
-
-Example
-
-(Note: this example uses SGI semantics for hash<>
--- the kind used by gcc and most Unix compiler suites -- and not
-Dinkumware semantics -- the kind used by Microsoft Visual Studio. If
-you are using MSVC, this example will not compile as-is: you'll need
-to change hash to hash_compare, and you
-won't use eqstr at all. See the MSVC documentation for
-hash_map and hash_compare, for more
-details.)
-
-
-#include <iostream>
-#include <google/sparse_hash_set>
-
-using google::sparse_hash_set; // namespace where class lives by default
-using std::cout;
-using std::endl;
-using ext::hash; // or __gnu_cxx::hash, or maybe tr1::hash, depending on your OS
-
-struct eqstr
-{
- bool operator()(const char* s1, const char* s2) const
- {
- return (s1 == s2) || (s1 && s2 && strcmp(s1, s2) == 0);
- }
-};
-
-void lookup(const sparse_hash_set<const char*, hash<const char*>, eqstr>& Set,
- const char* word)
-{
- sparse_hash_set<const char*, hash<const char*>, eqstr>::const_iterator it
- = Set.find(word);
- cout << word << ": "
- << (it != Set.end() ? "present" : "not present")
- << endl;
-}
-
-int main()
-{
- sparse_hash_set<const char*, hash<const char*>, eqstr> Set;
- Set.insert("kiwi");
- Set.insert("plum");
- Set.insert("apple");
- Set.insert("mango");
- Set.insert("apricot");
- Set.insert("banana");
-
- lookup(Set, "mango");
- lookup(Set, "apple");
- lookup(Set, "durian");
-}
-
-
-
-Definition
-
-Defined in the header sparse_hash_set.
-This class is not part of the C++ standard, though it is mostly
-compatible with the tr1 class unordered_set.
-
-
-Template parameters
-
-
-Parameter Description Default
-
-
-
- Key
-
-
- The hash_set's key and value type. This is also defined as
- sparse_hash_set::key_type and
- sparse_hash_set::value_type.
-
-
-
-
-
-
-
-
- HashFcn
-
-
- The hash function used by the
- hash_set. This is also defined as sparse_hash_set::hasher.
-
- Note: Hashtable performance depends heavily on the choice of
- hash function. See the performance
- page for more information.
-
-
- hash<Key>
-
-
-
-
-
- EqualKey
-
-
- The hash_set key equality function: a binary predicate that determines
- whether two keys are equal. This is also defined as
- sparse_hash_set::key_equal.
-
-
- equal_to<Key>
-
-
-
-
-
- Alloc
-
-
- Ignored; this is included only for API-compatibility
- with SGI's (and tr1's) STL implementation.
-
-
-
-
-
-
-
-
-Model of
-
-Unique Hashed Associative Container,
-Simple Associative Container
-
-
-Type requirements
-
-
--
-Key is Assignable.
-
-
-EqualKey is a Binary Predicate whose argument type is Key.
-
-
-EqualKey is an equivalence relation.
-
-
-Alloc is an Allocator.
-
-
-
-Public base classes
-
-None.
-
-
-Members
-
-
-Member Where defined Description
-
-
-
- value_type
-
-
- Container
-
-
- The type of object, T, stored in the hash_set.
-
-
-
-
-
- key_type
-
-
- Associative
- Container
-
-
- The key type associated with value_type.
-
-
-
-
-
- hasher
-
-
- Hashed
- Associative Container
-
-
- The sparse_hash_set's hash
- function.
-
-
-
-
-
- key_equal
-
-
- Hashed
- Associative Container
-
-
- Function
- object that compares keys for equality.
-
-
-
-
-
- allocator_type
-
-
- Unordered Associative Container (tr1)
-
-
- The type of the Allocator given as a template parameter.
-
-
-
-
-
- pointer
-
-
- Container
-
-
- Pointer to T.
-
-
-
-
-
- reference
-
-
- Container
-
-
- Reference to T
-
-
-
-
-
- const_reference
-
-
- Container
-
-
- Const reference to T
-
-
-
-
-
- size_type
-
-
- Container
-
-
- An unsigned integral type.
-
-
-
-
-
- difference_type
-
-
- Container
-
-
- A signed integral type.
-
-
-
-
-
- iterator
-
-
- Container
-
-
- Iterator used to iterate through a sparse_hash_set.
-
-
-
-
-
- const_iterator
-
-
- Container
-
-
- Const iterator used to iterate through a sparse_hash_set.
- (iterator and const_iterator are the same type.)
-
-
-
-
-
- local_iterator
-
-
- Unordered Associative Container (tr1)
-
-
- Iterator used to iterate through a subset of
- sparse_hash_set.
-
-
-
-
-
- const_local_iterator
-
-
- Unordered Associative Container (tr1)
-
-
- Const iterator used to iterate through a subset of
- sparse_hash_set.
-
-
-
-
-
- iterator begin() const
-
-
- Container
-
-
- Returns an iterator pointing to the beginning of the
- sparse_hash_set.
-
-
-
-
-
- iterator end() const
-
-
- Container
-
-
- Returns an iterator pointing to the end of the
- sparse_hash_set.
-
-
-
-
-
- local_iterator begin(size_type i)
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a local_iterator pointing to the beginning of bucket
- i in the sparse_hash_set.
-
-
-
-
-
- local_iterator end(size_type i)
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a local_iterator pointing to the end of bucket
- i in the sparse_hash_set. For
- sparse_hash_set, each bucket contains either 0 or 1 item.
-
-
-
-
-
- const_local_iterator begin(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a const_local_iterator pointing to the beginning of bucket
- i in the sparse_hash_set.
-
-
-
-
-
- const_local_iterator end(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns a const_local_iterator pointing to the end of bucket
- i in the sparse_hash_set. For
- sparse_hash_set, each bucket contains either 0 or 1 item.
-
-
-
-
-
- size_type size() const
-
-
- Container
-
-
- Returns the size of the sparse_hash_set.
-
-
-
-
-
- size_type max_size() const
-
-
- Container
-
-
- Returns the largest possible size of the sparse_hash_set.
-
-
-
-
-
- bool empty() const
-
-
- Container
-
-
- true if the sparse_hash_set's size is 0.
-
-
-
-
-
- size_type bucket_count() const
-
-
- Hashed
- Associative Container
-
-
- Returns the number of buckets used by the sparse_hash_set.
-
-
-
-
-
- size_type max_bucket_count() const
-
-
- Hashed
- Associative Container
-
-
- Returns the largest possible number of buckets used by the sparse_hash_set.
-
-
-
-
-
- size_type bucket_size(size_type i) const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns the number of elements in bucket i. For
- sparse_hash_set, this will be either 0 or 1.
-
-
-
-
-
- size_type bucket(const key_type& key) const
-
-
- Unordered Associative Container (tr1)
-
-
- If the key exists in the set, returns the index of the bucket
- containing the given key; otherwise, returns the bucket the key
- would be inserted into.
- This value may be passed to begin(size_type) and
- end(size_type).
-
-
-
-
-
- float load_factor() const
-
-
- Unordered Associative Container (tr1)
-
-
- The number of elements in the sparse_hash_set divided by
- the number of buckets.
-
-
-
-
-
- float max_load_factor() const
-
-
- Unordered Associative Container (tr1)
-
-
- The maximum load factor before increasing the number of buckets in
- the sparse_hash_set.
-
-
-
-
-
- void max_load_factor(float new_grow)
-
-
- Unordered Associative Container (tr1)
-
-
- Sets the maximum load factor before increasing the number of
- buckets in the sparse_hash_set.
-
-
-
-
-
- float min_load_factor() const
-
-
- sparse_hash_set
-
-
- The minimum load factor before decreasing the number of buckets in
- the sparse_hash_set.
-
-
-
-
-
- void min_load_factor(float new_grow)
-
-
- sparse_hash_set
-
-
- Sets the minimum load factor before decreasing the number of
- buckets in the sparse_hash_set.
-
-
-
-
-
- void set_resizing_parameters(float shrink, float grow)
-
-
- sparse_hash_set
-
-
- DEPRECATED. See below.
-
-
-
-
-
- void resize(size_type n)
-
-
- Hashed
- Associative Container
-
-
- Increases the bucket count to hold at least n items.
- [2] [3]
-
-
-
-
-
- void rehash(size_type n)
-
-
- Unordered Associative Container (tr1)
-
-
- Increases the bucket count to hold at least n items.
- This is identical to resize.
- [2] [3]
-
-
-
-
-
- hasher hash_funct() const
-
-
- Hashed
- Associative Container
-
-
- Returns the hasher object used by the sparse_hash_set.
-
-
-
-
-
- hasher hash_function() const
-
-
- Unordered Associative Container (tr1)
-
-
- Returns the hasher object used by the sparse_hash_set.
- This is identical to hash_funct.
-
-
-
-
-
- key_equal key_eq() const
-
-
- Hashed
- Associative Container
-
-
- Returns the key_equal object used by the
- sparse_hash_set.
-
-
-
-
-
- sparse_hash_set()
-
-
- Container
-
-
- Creates an empty sparse_hash_set.
-
-
-
-
-
- sparse_hash_set(size_type n)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty sparse_hash_set that's optimized for holding
- up to n items.
- [3]
-
-
-
-
-
- sparse_hash_set(size_type n, const hasher& h)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty sparse_hash_set that's optimized for up
- to n items, using h as the hash function.
-
-
-
-
-
- sparse_hash_set(size_type n, const hasher& h, const
- key_equal& k)
-
-
- Hashed
- Associative Container
-
-
- Creates an empty sparse_hash_set that's optimized for up
- to n items, using h as the hash function and
- k as the key equal function.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_set(InputIterator f, InputIterator l)
-[2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a sparse_hash_set with a copy of a range.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_set(InputIterator f, InputIterator l, size_type n)
-[2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a sparse_hash_set with a copy of a range that's optimized to
- hold up to n items.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_set(InputIterator f, InputIterator l, size_type n, const
-hasher& h)
-[2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a sparse_hash_set with a copy of a range that's optimized to hold
- up to n items, using h as the hash function.
-
-
-
-
-
- template <class InputIterator>
-sparse_hash_set(InputIterator f, InputIterator l, size_type n, const
-hasher& h, const key_equal& k)
-[2]
-
-
- Unique
- Hashed Associative Container
-
-
- Creates a sparse_hash_set with a copy of a range that's optimized for
- holding up to n items, using h as the hash
- function and k as the key equal function.
-
-
-
-
-
- sparse_hash_set(const hash_set&)
-
-
- Container
-
-
- The copy constructor.
-
-
-
-
-
- sparse_hash_set& operator=(const hash_set&)
-
-
- Container
-
-
- The assignment operator
-
-
-
-
-
- void swap(hash_set&)
-
-
- Container
-
-
- Swaps the contents of two hash_sets.
-
-
-
-
-
- pair<iterator, bool> insert(const value_type& x)
-
-
-
- Unique
- Associative Container
-
-
- Inserts x into the sparse_hash_set.
-
-
-
-
-
- template <class InputIterator>
-void insert(InputIterator f, InputIterator l)
-[2]
-
-
- Unique
- Associative Container
-
-
- Inserts a range into the sparse_hash_set.
-
-
-
-
-
- void set_deleted_key(const key_type& key) [4]
-
-
- sparse_hash_set
-
-
- See below.
-
-
-
-
-
- void clear_deleted_key() [4]
-
-
- sparse_hash_set
-
-
- See below.
-
-
-
-
-
- void erase(iterator pos)
-
-
- Associative
- Container
-
-
- Erases the element pointed to by pos.
- [4]
-
-
-
-
-
- size_type erase(const key_type& k)
-
-
- Associative
- Container
-
-
- Erases the element whose key is k.
- [4]
-
-
-
-
-
- void erase(iterator first, iterator last)
-
-
- Associative
- Container
-
-
- Erases all elements in a range.
- [4]
-
-
-
-
-
- void clear()
-
-
- Associative
- Container
-
-
- Erases all of the elements.
-
-
-
-
-
- iterator find(const key_type& k) const
-
-
- Associative
- Container
-
-
- Finds an element whose key is k.
-
-
-
-
-
- size_type count(const key_type& k) const
-
-
- Unique
- Associative Container
-
-
- Counts the number of elements whose key is k.
-
-
-
-
-
- pair<iterator, iterator> equal_range(const
-key_type& k) const
-
-
- Associative
- Container
-
-
- Finds a range containing all elements whose key is k.
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- sparse_hash_set
-
-
- See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- sparse_hash_set
-
-
- See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- sparse_hash_set
-
-
- See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- sparse_hash_set
-
-
- See below.
-
-
-
-
-
- bool operator==(const hash_set&, const hash_set&)
-
-
-
- Hashed
- Associative Container
-
-
- Tests two hash_sets for equality. This is a global function, not a
- member function.
-
-
-
-
-
-
-New members
-
-These members are not defined in the Unique
-Hashed Associative Container, Simple
-Associative Container, or tr1's Unordered Associative
-Container requirements, but are specific to
-sparse_hash_set.
-
-
-Member Description
-
-
-
- void set_deleted_key(const key_type& key)
-
-
- Sets the distinguished "deleted" key to key. This must be
- called before any calls to erase(). [4]
-
-
-
-
-
- void clear_deleted_key()
-
-
- Clears the distinguished "deleted" key. After this is called,
- calls to erase() are not valid on this object.
- [4]
-
-
-
-
- void set_resizing_parameters(float shrink, float grow)
-
-
- This function is DEPRECATED. It is equivalent to calling
- min_load_factor(shrink); max_load_factor(grow).
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- Write hashtable metadata to fp. See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- Read hashtable metadata from fp. See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- Write hashtable contents to fp. This is valid only if the
- hashtable key and value are "plain" data. See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- Read hashtable contents from fp. This is valid only if the
- hashtable key and value are "plain" data. See below.
-
-
-
-
-
-
-Notes
-
-[1]
-
-This member function relies on member template functions, which
-may not be supported by all compilers. If your compiler supports
-member templates, you can call this function with any type of input
-iterator. If your compiler does not yet support member templates,
-though, then the arguments must either be of type const
-value_type* or of type sparse_hash_set::const_iterator.
-
-[2]
-
-In order to preserve iterators, erasing hashtable elements does not
-cause a hashtable to resize. This means that after a string of
-erase() calls, the hashtable will use more space than is
-required. At a cost of invalidating all current iterators, you can
-call resize() to manually compact the hashtable. The
-hashtable promotes too-small resize() arguments to the
-smallest legal value, so to compact a hashtable, it's sufficient to
-call resize(0).
-
-
-[3]
-
-Unlike some other hashtable implementations, the optional n in
-the calls to the constructor, resize, and rehash
-indicates not the desired number of buckets that
-should be allocated, but instead the expected number of items to be
-inserted. The class then sizes the hash-set appropriately for the
-number of items specified. It's not an error to actually insert more
-or fewer items into the hashtable, but the implementation is most
-efficient -- does the fewest hashtable resizes -- if the number of
-inserted items is n or slightly less.
-
-[4]
-
-sparse_hash_set requires you call
-set_deleted_key() before calling erase(). (This is
-the largest difference between the sparse_hash_set API and
-other hash-set APIs. See implementation.html
-for why this is necessary.)
-The argument to set_deleted_key() should be a key-value that
-is never used for legitimate hash-set entries. It is an error to call
-erase() without first calling set_deleted_key(), and
-it is also an error to call insert() with an item whose key
-is the "deleted key."
-
-There is no need to call set_deleted_key if you do not
-wish to call erase() on the hash-set.
-
-It is acceptable to change the deleted-key at any time by calling
-set_deleted_key() with a new argument. You can also call
-clear_deleted_key(), at which point all keys become valid for
-insertion but no hashtable entries can be deleted until
-set_deleted_key() is called again.
-
-
-Input/Output
-
-It is possible to save and restore sparse_hash_set objects
-to disk. Storage takes place in two steps. The first writes the
-hashtable metadata. The second writes the actual data.
-
-To write a hashtable to disk, first call write_metadata()
-on an open file pointer. This saves the hashtable information in a
-byte-order-independent format.
-
-After the metadata has been written to disk, you must write the
-actual data stored in the hash-set to disk. If both the key and data
-are "simple" enough, you can do this by calling
-write_nopointer_data(). "Simple" data is data that can be
-safely copied to disk via fwrite(). Native C data types fall
-into this category, as do structs of native C data types. Pointers
-and STL objects do not.
-
-Note that write_nopointer_data() does not do any endian
-conversion. Thus, it is only appropriate when you intend to read the
-data on the same endian architecture as you write the data.
-
-If you cannot use write_nopointer_data() for any reason,
-you can write the data yourself by iterating over the
-sparse_hash_set with a const_iterator and writing
-the key and data in any manner you wish.
-
-To read the hashtable information from disk, first you must create
-a sparse_hash_set object. Then open a file pointer to point
-to the saved hashtable, and call read_metadata(). If you
-saved the data via write_nopointer_data(), you can follow the
-read_metadata() call with a call to
-read_nopointer_data(). This is all that is needed.
-
-If you saved the data through a custom write routine, you must call
-a custom read routine to read in the data. To do this, iterate over
-the sparse_hash_set with an iterator; this operation
-is valid because the metadata has already been set up. For each
-iterator item, you can read the key and value from disk, and set it
-appropriately. You will need to do a const_cast on the
-iterator, since *it is always const. The
-code might look like this:
-
- for (sparse_hash_set<int*>::iterator it = ht.begin();
- it != ht.end(); ++it) {
- const_cast<int*&>(*it) = new int;
- fread(const_cast<int*>(*it), sizeof(int), 1, fp);
- }
-
-
-Here's another example, where the item stored in the hash-set is
-a C++ object with a non-trivial constructor. In this case, you must
-use "placement new" to construct the object at the correct memory
-location.
-
- for (sparse_hash_set<ComplicatedClass>::iterator it = ht.begin();
- it != ht.end(); ++it) {
- int ctor_arg; // ComplicatedClass takes an int as its constructor arg
- fread(&ctor_arg, sizeof(int), 1, fp);
- new (const_cast<ComplicatedClass*>(&(*it))) ComplicatedClass(ctor_arg);
- }
-
-
-
-Validity of Iterators
-
-erase() is guaranteed not to invalidate any iterators --
-except for any iterators pointing to the item being erased, of course.
-insert() invalidates all iterators, as does
-resize().
-
-This is implemented by making erase() not resize the
-hashtable. If you desire maximum space efficiency, you can call
-resize(0) after a string of erase() calls, to force
-the hashtable to resize to the smallest possible size.
-
-In addition to invalidating iterators, insert()
-and resize() invalidate all pointers into the hashtable. If
-you want to store a pointer to an object held in a sparse_hash_set,
-either do so after finishing hashtable inserts, or store the object on
-the heap and a pointer to it in the sparse_hash_set.
-
-
-See also
-
-The following are SGI STL, and some Google STL, concepts and
-classes related to sparse_hash_set.
-
-hash_set,
-Associative Container,
-Hashed Associative Container,
-Simple Associative Container,
-Unique Hashed Associative Container,
-set,
-map
-multiset,
-multimap,
-hash_map,
-hash_multiset,
-hash_multimap,
-sparsetable,
-sparse_hash_map,
-dense_hash_set,
-dense_hash_map
-
-
-
diff --git a/src/sparsehash-1.6/doc/sparsetable.html b/src/sparsehash-1.6/doc/sparsetable.html
deleted file mode 100644
index d8c8364..0000000
--- a/src/sparsehash-1.6/doc/sparsetable.html
+++ /dev/null
@@ -1,1393 +0,0 @@
-
-
-
-
-
-sparsetable<T, GROUP_SIZE>
-
-
-
-
-[Note: this document is formatted similarly to the SGI STL
-implementation documentation pages, and refers to concepts and classes
-defined there. However, neither this document nor the code it
-describes is associated with SGI, nor is it necessary to have SGI's
-STL implementation installed in order to use this class.]
-
-sparsetable<T, GROUP_SIZE>
-
-A sparsetable is a Random
-Access Container that supports constant time random access to
-elements, and constant time insertion and removal of elements. It
-implements the "array" or "table" abstract data type. The number of
-elements in a sparsetable is set at constructor time, though
-you can change it at any time by calling resize().
-
-sparsetable is distinguished from other array
-implementations, including the default C implementation, in its stingy
-use of memory -- in particular, unused array elements require only 1 bit
-of memory to store, rather than sizeof(T) bytes -- and by
-the ability to save and restore contents to disk. On the other hand,
-this array implementation, while still efficient, is slower than other
-array implementations.
-
-
-A sparsetable distinguishes between table elements that
-have been assigned and those that are unassigned.
-Assigned table elements are those that have had a value set via
-set(), operator[], assignment via an iterator, and
-so forth. Unassigned table elements are those that have not had a
-value set in one of these ways, or that have been explicitly
-unassigned via a call to erase() or clear(). Lookup
-is valid on both assigned and unassigned table elements; for
-unassigned elements, lookup returns the default value
-T().
-
-
-This class is appropriate for applications that need to store large
-arrays in memory, or for applications that need these arrays to be
-persistent.
-
-
-Example
-
-
-#include <iostream>
-#include <google/sparsetable>
-
-using google::sparsetable; // namespace where class lives by default
-using std::cout;
-
-int main()
-{
-  sparsetable<int> t(100);
-  t[5] = 6;
-  cout << "t[5] = " << t[5] << "\n";
-  cout << "Default value = " << t[99] << "\n"; // unassigned: prints T(), i.e. 0
-}
-
-
-
-Definition
-
-Defined in the header sparsetable. This
-class is not part of the C++ standard.
-
-
-Template parameters
-
-
-Parameter Description Default
-
-
-
- T
-
-
- The sparsetable's value type: the type of object that is stored in
- the table.
-
-
-
-
-
-
-
-
- GROUP_SIZE
-
-
- The number of elements in each sparsetable group (see the implementation doc for more details
- on this value). This almost never needs to be specified; the default
- template parameter value works well in all situations.
-
-
-
-
-
-
-
-
-
-Model of
-
-Random Access Container
-
-
-Type requirements
-
-None, except for those imposed by the requirements of
-Random
-Access Container
-
-
-Public base classes
-
-None.
-
-
-Members
-
-
-Member Where defined Description
-
-
-
- value_type
-
-
- Container
-
-
- The type of object, T, stored in the table.
-
-
-
-
-
- pointer
-
-
- Container
-
-
- Pointer to T.
-
-
-
-
-
- reference
-
-
- Container
-
-
- Reference to T.
-
-
-
-
-
- const_reference
-
-
- Container
-
-
- Const reference to T.
-
-
-
-
-
- size_type
-
-
- Container
-
-
- An unsigned integral type.
-
-
-
-
-
- difference_type
-
-
- Container
-
-
- A signed integral type.
-
-
-
-
-
- iterator
-
-
- Container
-
-
- Iterator used to iterate through a sparsetable.
-
-
-
-
-
- const_iterator
-
-
- Container
-
-
- Const iterator used to iterate through a sparsetable.
-
-
-
-
-
- reverse_iterator
-
-
- Reversible
- Container
-
-
- Iterator used to iterate backwards through a sparsetable.
-
-
-
-
-
- const_reverse_iterator
-
-
- Reversible
- Container
-
-
- Const iterator used to iterate backwards through a
- sparsetable.
-
-
-
-
-
- nonempty_iterator
-
-
- sparsetable
-
-
- Iterator used to iterate through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- const_nonempty_iterator
-
-
- sparsetable
-
-
- Const iterator used to iterate through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- reverse_nonempty_iterator
-
-
- sparsetable
-
-
- Iterator used to iterate backwards through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- const_reverse_nonempty_iterator
-
-
- sparsetable
-
-
- Const iterator used to iterate backwards through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- destructive_iterator
-
-
- sparsetable
-
-
- Iterator used to iterate through the
- assigned elements of the
- sparsetable, erasing elements as it iterates.
- [1]
-
-
-
-
-
- iterator begin()
-
-
- Container
-
-
- Returns an iterator pointing to the beginning of the
- sparsetable.
-
-
-
-
-
- iterator end()
-
-
- Container
-
-
- Returns an iterator pointing to the end of the
- sparsetable.
-
-
-
-
-
- const_iterator begin() const
-
-
- Container
-
-
- Returns a const_iterator pointing to the beginning of the
- sparsetable.
-
-
-
-
-
- const_iterator end() const
-
-
- Container
-
-
- Returns a const_iterator pointing to the end of the
- sparsetable.
-
-
-
-
-
- reverse_iterator rbegin()
-
-
- Reversible
- Container
-
-
- Returns a reverse_iterator pointing to the beginning of the
- reversed sparsetable.
-
-
-
-
-
- reverse_iterator rend()
-
-
- Reversible
- Container
-
-
- Returns a reverse_iterator pointing to the end of the
- reversed sparsetable.
-
-
-
-
-
- const_reverse_iterator rbegin() const
-
-
- Reversible
- Container
-
-
- Returns a const_reverse_iterator pointing to the beginning
- of the reversed sparsetable.
-
-
-
-
-
- const_reverse_iterator rend() const
-
-
- Reversible
- Container
-
-
- Returns a const_reverse_iterator pointing to the end of
- the reversed sparsetable.
-
-
-
-
-
- nonempty_iterator nonempty_begin()
-
-
- sparsetable
-
-
- Returns a nonempty_iterator pointing to the first
- assigned element of the
- sparsetable.
-
-
-
-
-
- nonempty_iterator nonempty_end()
-
-
- sparsetable
-
-
- Returns a nonempty_iterator pointing to the end of the
- sparsetable.
-
-
-
-
-
- const_nonempty_iterator nonempty_begin() const
-
-
- sparsetable
-
-
- Returns a const_nonempty_iterator pointing to the first
- assigned element of the
- sparsetable.
-
-
-
-
-
- const_nonempty_iterator nonempty_end() const
-
-
- sparsetable
-
-
- Returns a const_nonempty_iterator pointing to the end of
- the sparsetable.
-
-
-
-
-
- reverse_nonempty_iterator nonempty_rbegin()
-
-
- sparsetable
-
-
- Returns a reverse_nonempty_iterator pointing to the first
- assigned element of the reversed
- sparsetable.
-
-
-
-
-
- reverse_nonempty_iterator nonempty_rend()
-
-
- sparsetable
-
-
- Returns a reverse_nonempty_iterator pointing to the end of
- the reversed sparsetable.
-
-
-
-
-
- const_reverse_nonempty_iterator nonempty_rbegin() const
-
-
- sparsetable
-
-
- Returns a const_reverse_nonempty_iterator pointing to the
- first assigned element of the reversed
- sparsetable.
-
-
-
-
-
- const_reverse_nonempty_iterator nonempty_rend() const
-
-
- sparsetable
-
-
- Returns a const_reverse_nonempty_iterator pointing to the
- end of the reversed sparsetable.
-
-
-
-
-
- destructive_iterator destructive_begin()
-
-
- sparsetable
-
-
- Returns a destructive_iterator pointing to the first
- assigned element of the
- sparsetable.
-
-
-
-
-
- destructive_iterator destructive_end()
-
-
- sparsetable
-
-
- Returns a destructive_iterator pointing to the end of
- the sparsetable.
-
-
-
-
-
- size_type size() const
-
-
- Container
-
-
- Returns the size of the sparsetable.
-
-
-
-
-
- size_type max_size() const
-
-
- Container
-
-
- Returns the largest possible size of the sparsetable.
-
-
-
-
-
- bool empty() const
-
-
- Container
-
-
- true if the sparsetable's size is 0.
-
-
-
-
-
- size_type num_nonempty() const
-
-
- sparsetable
-
-
- Returns the number of sparsetable elements that are currently assigned.
-
-
-
-
-
- sparsetable(size_type n)
-
-
- Container
-
-
- Creates a sparsetable with n elements.
-
-
-
-
-
- sparsetable(const sparsetable&)
-
-
- Container
-
-
- The copy constructor.
-
-
-
-
-
- ~sparsetable()
-
-
- Container
-
-
- The destructor.
-
-
-
-
-
- sparsetable& operator=(const sparsetable&)
-
-
- Container
-
-
- The assignment operator
-
-
-
-
-
- void swap(sparsetable&)
-
-
- Container
-
-
- Swaps the contents of two sparsetables.
-
-
-
-
-
- reference operator[](size_type n)
-
-
- Random
- Access Container
-
-
- Returns the n'th element. [2]
-
-
-
-
-
- const_reference operator[](size_type n) const
-
-
- Random
- Access Container
-
-
- Returns the n'th element.
-
-
-
-
-
- bool test(size_type i) const
-
-
- sparsetable
-
-
- true if the i'th element of the sparsetable is assigned.
-
-
-
-
-
- bool test(iterator pos) const
-
-
- sparsetable
-
-
- true if the sparsetable element pointed to by pos
- is assigned.
-
-
-
-
-
- bool test(const_iterator pos) const
-
-
- sparsetable
-
-
- true if the sparsetable element pointed to by pos
- is assigned.
-
-
-
-
-
- const_reference get(size_type i) const
-
-
- sparsetable
-
-
- returns the i'th element of the sparsetable.
-
-
-
-
-
- reference set(size_type i, const_reference val)
-
-
- sparsetable
-
-
- Sets the i'th element of the sparsetable to value
- val.
-
-
-
-
-
- void erase(size_type i)
-
-
- sparsetable
-
-
- Erases the i'th element of the sparsetable.
-
-
-
-
-
- void erase(iterator pos)
-
-
- sparsetable
-
-
- Erases the element of the sparsetable pointed to by
- pos.
-
-
-
-
-
- void erase(iterator first, iterator last)
-
-
- sparsetable
-
-
- Erases the elements of the sparsetable in the range
- [first, last).
-
-
-
-
-
- void clear()
-
-
- sparsetable
-
-
- Erases all of the elements.
-
-
-
-
-
- void resize(size_type n)
-
-
- sparsetable
-
-
- Changes the size of sparsetable to n.
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- sparsetable
-
-
- See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- sparsetable
-
-
- See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- sparsetable
-
-
- See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- sparsetable
-
-
- See below.
-
-
-
-
-
- bool operator==(const sparsetable&, const sparsetable&)
-
-
-
- Forward
- Container
-
-
- Tests two sparsetables for equality. This is a global function,
- not a member function.
-
-
-
-
-
- bool operator<(const sparsetable&, const sparsetable&)
-
-
-
- Forward
- Container
-
-
- Lexicographical comparison. This is a global function,
- not a member function.
-
-
-
-
-
-
-New members
-
-These members are not defined in the Random
-Access Container requirement, but are specific to
-sparsetable.
-
-
-Member Description
-
-
-
- nonempty_iterator
-
-
- Iterator used to iterate through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- const_nonempty_iterator
-
-
- Const iterator used to iterate through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- reverse_nonempty_iterator
-
-
- Iterator used to iterate backwards through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- const_reverse_nonempty_iterator
-
-
- Const iterator used to iterate backwards through the
- assigned elements of the
- sparsetable.
-
-
-
-
-
- destructive_iterator
-
-
- Iterator used to iterate through the
- assigned elements of the
- sparsetable, erasing elements as it iterates.
- [1]
-
-
-
-
-
- nonempty_iterator nonempty_begin()
-
-
- Returns a nonempty_iterator pointing to the first
- assigned element of the
- sparsetable.
-
-
-
-
-
- nonempty_iterator nonempty_end()
-
-
- Returns a nonempty_iterator pointing to the end of the
- sparsetable.
-
-
-
-
-
- const_nonempty_iterator nonempty_begin() const
-
-
- Returns a const_nonempty_iterator pointing to the first
- assigned element of the
- sparsetable.
-
-
-
-
-
- const_nonempty_iterator nonempty_end() const
-
-
- Returns a const_nonempty_iterator pointing to the end of
- the sparsetable.
-
-
-
-
-
- reverse_nonempty_iterator nonempty_rbegin()
-
-
- Returns a reverse_nonempty_iterator pointing to the first
- assigned element of the reversed
- sparsetable.
-
-
-
-
-
- reverse_nonempty_iterator nonempty_rend()
-
-
- Returns a reverse_nonempty_iterator pointing to the end of
- the reversed sparsetable.
-
-
-
-
-
- const_reverse_nonempty_iterator nonempty_rbegin() const
-
-
- Returns a const_reverse_nonempty_iterator pointing to the
- first assigned element of the reversed
- sparsetable.
-
-
-
-
-
- const_reverse_nonempty_iterator nonempty_rend() const
-
-
- Returns a const_reverse_nonempty_iterator pointing to the
- end of the reversed sparsetable.
-
-
-
-
-
- destructive_iterator destructive_begin()
-
-
- Returns a destructive_iterator pointing to the first
- assigned element of the
- sparsetable.
-
-
-
-
-
- destructive_iterator destructive_end()
-
-
- Returns a destructive_iterator pointing to the end of
- the sparsetable.
-
-
-
-
-
- size_type num_nonempty() const
-
-
- Returns the number of sparsetable elements that are currently assigned.
-
-
-
-
-
- bool test(size_type i) const
-
-
- true if the i'th element of the sparsetable is assigned.
-
-
-
-
-
- bool test(iterator pos) const
-
-
- true if the sparsetable element pointed to by pos
- is assigned.
-
-
-
-
-
- bool test(const_iterator pos) const
-
-
- true if the sparsetable element pointed to by pos
- is assigned.
-
-
-
-
-
- const_reference get(size_type i) const
-
-
- returns the i'th element of the sparsetable. If
- the i'th element is assigned, the
- assigned value is returned, otherwise, the default value
- T() is returned.
-
-
-
-
-
- reference set(size_type i, const_reference val)
-
-
- Sets the i'th element of the sparsetable to value
- val, and returns a reference to the i'th element
- of the table. This operation causes the i'th element to
- be assigned.
-
-
-
-
-
- void erase(size_type i)
-
-
- Erases the i'th element of the sparsetable. This
- operation causes the i'th element to be unassigned.
-
-
-
-
-
- void erase(iterator pos)
-
-
- Erases the element of the sparsetable pointed to by
- pos. This operation causes the i'th element to
- be unassigned.
-
-
-
-
-
- void erase(iterator first, iterator last)
-
-
- Erases the elements of the sparsetable in the range
- [first, last). This operation causes these elements to
- be unassigned.
-
-
-
-
-
- void clear()
-
-
- Erases all of the elements. This causes all elements to be
- unassigned.
-
-
-
-
-
- void resize(size_type n)
-
-
- Changes the size of the sparsetable to n. If n is
- greater than the old size, new, unassigned
- elements are appended. If n is less than the old size,
- all elements at positions >= n are deleted.
-
-
-
-
-
- bool write_metadata(FILE *fp)
-
-
- Write sparsetable metadata to fp. See below.
-
-
-
-
-
- bool read_metadata(FILE *fp)
-
-
- Read sparsetable metadata from fp. See below.
-
-
-
-
-
- bool write_nopointer_data(FILE *fp)
-
-
- Write sparsetable contents to fp. This is valid only if the
- table's elements are "plain" data. See below.
-
-
-
-
-
- bool read_nopointer_data(FILE *fp)
-
-
- Read sparsetable contents from fp. This is valid only if the
- table's elements are "plain" data. See below.
-
-
-
-
-
-
-Notes
-
-[1]
-
-sparsetable::destructive_iterator iterates through a
-sparsetable like a normal iterator, but ++it may delete the
-element being iterated past. Obviously, this iterator can only be
-used once on a given table! One application of this iterator is to
-copy data from a sparsetable to some other data structure without
-using extra memory to store the data in both places during the
-copy.
-
-[2]
-
-Since operator[] might insert a new element into the
-sparsetable, it can't possibly be a const member
-function. In theory, since it might insert a new element, it should
-cause the element it refers to to become assigned. However, this is undesirable when
-operator[] is used to examine elements, rather than assign
-them. Thus, as an implementation trick, operator[] does not
-really return a reference. Instead it returns an object that
-behaves almost exactly like a reference. This object,
-however, delays marking the corresponding sparsetable element as
-assigned until a value is actually assigned to it.
-
-For a bit more detail: the object returned by operator[]
-is an opaque type which defines operator=, operator
-reference(), and operator&. The first operator controls
-assigning to the value. The second controls examining the value. The
-third controls pointing to the value.
-
-All three operators perform exactly as an object of type
-reference would perform. The only problems arise
-when this object is used in situations where C++ cannot perform the
-conversion automatically. By far the most common situation is with
-variadic functions such as printf. In such situations, you
-may need to manually cast the object to the right type:
-
- printf("%d", static_cast<typename table::reference>(table[i]));
-
-
-
-Input/Output
-
-It is possible to save and restore sparsetable objects
-to disk. Storage takes place in two steps. The first writes the
-table metadata. The second writes the actual data.
-
-To write a sparsetable to disk, first call write_metadata()
-on an open file pointer. This saves the sparsetable information in a
-byte-order-independent format.
-
-After the metadata has been written to disk, you must write the
-actual data stored in the sparsetable to disk. If the value is
-"simple" enough, you can do this by calling
-write_nopointer_data(). "Simple" data is data that can be
-safely copied to disk via fwrite(). Native C data types fall
-into this category, as do structs of native C data types. Pointers
-and STL objects do not.
-
-Note that write_nopointer_data() does not do any endian
-conversion. Thus, it is only appropriate when you intend to read the
-data on the same endian architecture as you write the data.
-
-If you cannot use write_nopointer_data() for any reason,
-you can write the data yourself by iterating over the
-sparsetable with a const_nonempty_iterator and
-writing the key and data in any manner you wish.
-
-To read the sparsetable information from disk, first you must create
-a sparsetable object. Then open a file pointer to point
-to the saved sparsetable, and call read_metadata(). If you
-saved the data via write_nopointer_data(), you can follow the
-read_metadata() call with a call to
-read_nopointer_data(). This is all that is needed.
-
-If you saved the data through a custom write routine, you must call
-a custom read routine to read in the data. To do this, iterate over
-the sparsetable with a nonempty_iterator; this
-operation makes sense because the metadata has already been set up.
-For each iterator item, you can read the key and value from disk, and
-set it appropriately. The code might look like this:
-
- for (sparsetable<int*>::nonempty_iterator it = t.nonempty_begin();
- it != t.nonempty_end(); ++it) {
- *it = new int;
- fread(*it, sizeof(int), 1, fp);
- }
-
-
-Here's another example, where the item stored in the sparsetable is
-a C++ object with a non-trivial constructor. In this case, you must
-use "placement new" to construct the object at the correct memory
-location.
-
- for (sparsetable<ComplicatedCppClass>::nonempty_iterator it = t.nonempty_begin();
- it != t.nonempty_end(); ++it) {
- int constructor_arg; // ComplicatedCppClass takes an int to construct
- fread(&constructor_arg, sizeof(int), 1, fp);
- new (&(*it)) ComplicatedCppClass(constructor_arg); // placement new
- }
-
-
-
-See also
-
-The following are SGI STL concepts and classes related to
-sparsetable.
-
-Container,
-Random Access Container,
-sparse_hash_set,
-sparse_hash_map
-
-
-
diff --git a/src/sparsehash-1.6/experimental/Makefile b/src/sparsehash-1.6/experimental/Makefile
deleted file mode 100644
index aa997f7..0000000
--- a/src/sparsehash-1.6/experimental/Makefile
+++ /dev/null
@@ -1,9 +0,0 @@
-example: example.o libchash.o
- $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^
-
-.SUFFIXES: .c .o .h
-.c.o:
- $(CC) -c $(CPPFLAGS) $(CFLAGS) -o $@ $<
-
-example.o: example.c libchash.h
-libchash.o: libchash.c libchash.h
diff --git a/src/sparsehash-1.6/experimental/README b/src/sparsehash-1.6/experimental/README
deleted file mode 100644
index 150161d..0000000
--- a/src/sparsehash-1.6/experimental/README
+++ /dev/null
@@ -1,14 +0,0 @@
-This is a C version of sparsehash (and also, maybe, densehash) that I
-wrote way back when; it served as the inspiration for the C++
-version. The API for the C version is much uglier than the C++ one,
-because of the lack of template support. I believe the class works,
-but I'm not convinced it's really flexible or easy enough to use.
-
-It would be nice to rework this C class to follow the C++ API as
-closely as possible (eg have a set_deleted_key() instead of using a
-#define like this code does now). I believe the code compiles and
-runs, if anybody is interested in using it now, but it's subject to
-major change in the future, as people work on it.
-
-Craig Silverstein
-20 March 2005
diff --git a/src/sparsehash-1.6/experimental/example.c b/src/sparsehash-1.6/experimental/example.c
deleted file mode 100644
index 38a3265..0000000
--- a/src/sparsehash-1.6/experimental/example.c
+++ /dev/null
@@ -1,54 +0,0 @@
-#include <stdio.h>
-#include <stdlib.h>
-#include <assert.h>
-#include "libchash.h"
-
-static void TestInsert() {
- struct HashTable* ht;
- HTItem* bck;
-
- ht = AllocateHashTable(1, 0); /* value is 1 byte, 0: don't copy keys */
-
- HashInsert(ht, PTR_KEY(ht, "January"), 31); /* 0: don't overwrite old val */
- bck = HashInsert(ht, PTR_KEY(ht, "February"), 28);
- bck = HashInsert(ht, PTR_KEY(ht, "March"), 31);
-
- bck = HashFind(ht, PTR_KEY(ht, "February"));
- assert(bck);
- assert(bck->data == 28);
-
- FreeHashTable(ht);
-}
-
-static void TestFindOrInsert() {
- struct HashTable* ht;
- int i;
- int iterations = 1000000;
- int range = 30; /* random numbers between 0 and 29 */
-
- ht = AllocateHashTable(4, 0); /* value is 4 bytes, 0: don't copy keys */
-
- /* We'll test how good rand() is as a random number generator */
- for (i = 0; i < iterations; ++i) {
- int key = rand() % range;
- HTItem* bck = HashFindOrInsert(ht, key, 0); /* initialize to 0 */
- bck->data++; /* found one more of them */
- }
-
- for (i = 0; i < range; ++i) {
- HTItem* bck = HashFind(ht, i);
- if (bck) {
- printf("%3d: %d\n", bck->key, bck->data);
- } else {
- printf("%3d: 0\n", i);
- }
- }
-
- FreeHashTable(ht);
-}
-
-int main(int argc, char** argv) {
- TestInsert();
- TestFindOrInsert();
- return 0;
-}
diff --git a/src/sparsehash-1.6/experimental/libchash.c b/src/sparsehash-1.6/experimental/libchash.c
deleted file mode 100644
index eff9eeb..0000000
--- a/src/sparsehash-1.6/experimental/libchash.c
+++ /dev/null
@@ -1,1537 +0,0 @@
-/* Copyright (c) 1998 - 2005, Google Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * ---
- * Author: Craig Silverstein
- *
- * This library is intended to be used for in-memory hash tables,
- * though it provides rudimentary permanent-storage capabilities.
- * It attempts to be fast, portable, and small. The best algorithm
- * to fulfill these goals is an internal probing hashing algorithm,
- * as in Knuth, _Art of Computer Programming_, vol III. Unlike
- * chained (open) hashing, it doesn't require a pointer for every
- * item, yet it is still constant time lookup in practice.
- *
- * Also to save space, we let the contents (both data and key) that
- * you insert be a union: if the key/data is small, we store it
- * directly in the hashtable, otherwise we store a pointer to it.
- * To keep you from having to figure out which, use KEY_PTR and
- * PTR_KEY to convert between the arguments to these functions and
- * a pointer to the real data. For instance:
- * char key[] = "ab", *key2;
- * HTItem *bck; HashTable *ht;
- * HashInsert(ht, PTR_KEY(ht, key), 0);
- * bck = HashFind(ht, PTR_KEY(ht, "ab"));
- * key2 = KEY_PTR(ht, bck->key);
- *
- * There are a rich set of operations supported:
- * AllocateHashTable() -- Allocates a hashtable structure and
- * returns it.
- * cchKey: if it's a positive number, then each key is a
- * fixed-length record of that length. If it's 0,
- * the key is assumed to be a \0-terminated string.
- * fSaveKey: normally, you are responsible for allocating
- * space for the key. If this is 1, we make a
- * copy of the key for you.
- * ClearHashTable() -- Removes everything from a hashtable
- * FreeHashTable() -- Frees memory used by a hashtable
- *
- * HashFind() -- takes a key (use PTR_KEY) and returns the
- * HTItem containing that key, or NULL if the
- * key is not in the hashtable.
- * HashFindLast() -- returns the item found by last HashFind()
- * HashFindOrInsert() -- inserts the key/data pair if the key
- * is not already in the hashtable, or
- * returns the appropriate HTItem if it is.
- * HashFindOrInsertItem() -- takes key/data as an HTItem.
- * HashInsert() -- adds a key/data pair to the hashtable. What
- * it does if the key is already in the table
- * depends on the value of SAMEKEY_OVERWRITE.
- * HashInsertItem() -- takes key/data as an HTItem.
- * HashDelete() -- removes a key/data pair from the hashtable,
- * if it's there. RETURNS 1 if it was there,
- * 0 else.
- * If you use sparse tables and never delete, the full data
- * space is available. Otherwise we steal -2 (maybe -3),
- * so you can't have data fields with those values.
- * HashDeleteLast() -- deletes the item returned by the last Find().
- *
- * HashFirstBucket() -- used to iterate over the buckets in a
- * hashtable. DON'T INSERT OR DELETE WHILE
- * ITERATING! You can't nest iterations.
- * HashNextBucket() -- RETURNS NULL at the end of iterating.
- *
- * HashSetDeltaGoalSize() -- if you're going to insert 1000 items
- * at once, call this fn with arg 1000.
- * It grows the table more intelligently.
- *
- * HashSave() -- saves the hashtable to a file. It saves keys ok,
- * but it doesn't know how to interpret the data field,
- * so if the data field is a pointer to some complex
- * structure, you must send a function that takes a
- * file pointer and a pointer to the structure, and
- * write whatever you want to write. It should return
- * the number of bytes written. If the file is NULL,
- * it should just return the number of bytes it would
- * write, without writing anything.
- * If your data field is just an integer, not a
- * pointer, just send NULL for the function.
- * HashLoad() -- loads a hashtable. It needs a function that takes
- * a file and the size of the structure, and expects
- * you to read in the structure and return a pointer
- * to it. You must do memory allocation, etc. If
- * the data is just a number, send NULL.
- * HashLoadKeys() -- unlike HashLoad(), doesn't load the data off disk
- * until needed. This saves memory, but if you look
- * up the same key a lot, it does a disk access each
- * time.
- * You can't do Insert() or Delete() on hashtables that were loaded
- * from disk.
- *
- * See libchash.h for parameters you can modify. Make sure LOG_WORD_SIZE
- * is defined correctly for your machine! (5 for 32 bit words, 6 for 64).
- */
-
-#include <assert.h>
-#include <stdlib.h>
-#include <string.h> /* for strcmp, memcmp, etc */
-#include <sys/types.h> /* ULTRIX needs this for in.h */
-#include <netinet/in.h> /* for reading/writing hashtables */
-#include <stdio.h>
-#include "libchash.h" /* all the types */
-
- /* if keys are stored directly but cchKey is less than sizeof(ulong), */
- /* this cuts off the bits at the end */
-char grgKeyTruncMask[sizeof(ulong)][sizeof(ulong)];
-#define KEY_TRUNC(ht, key) \
- ( STORES_PTR(ht) || (ht)->cchKey == sizeof(ulong) \
- ? (key) : ((key) & *(ulong *)&(grgKeyTruncMask[(ht)->cchKey][0])) )
-
- /* round num up to a multiple of wordsize. (LOG_WORD_SIZE-3 is in bytes) */
-#define WORD_ROUND(num) ( ((num-1) | ((1<<(LOG_WORD_SIZE-3))-1)) + 1 )
-#define NULL_TERMINATED 0 /* val of cchKey if keys are null-term strings */
-
- /* Useful operations we do to keys: compare them, copy them, free them */
-
-#define KEY_CMP(ht, key1, key2) ( !STORES_PTR(ht) ? (key1) - (key2) : \
- (key1) == (key2) ? 0 : \
- HashKeySize(ht) == NULL_TERMINATED ? \
- strcmp((char *)key1, (char *)key2) :\
- memcmp((void *)key1, (void *)key2, \
- HashKeySize(ht)) )
-
-#define COPY_KEY(ht, keyTo, keyFrom) do \
- if ( !STORES_PTR(ht) || !(ht)->fSaveKeys ) \
- (keyTo) = (keyFrom); /* just copy pointer or info */\
- else if ( (ht)->cchKey == NULL_TERMINATED ) /* copy 0-term.ed str */\
- { \
- (keyTo) = (ulong)HTsmalloc( WORD_ROUND(strlen((char *)(keyFrom))+1) ); \
- strcpy((char *)(keyTo), (char *)(keyFrom)); \
- } \
- else \
- { \
- (keyTo) = (ulong) HTsmalloc( WORD_ROUND((ht)->cchKey) ); \
- memcpy( (char *)(keyTo), (char *)(keyFrom), (ht)->cchKey); \
- } \
- while ( 0 )
-
-#define FREE_KEY(ht, key) do \
- if ( STORES_PTR(ht) && (ht)->fSaveKeys ) \
- if ( (ht)->cchKey == NULL_TERMINATED ) \
- HTfree((char *)(key), WORD_ROUND(strlen((char *)(key))+1)); \
- else \
- HTfree((char *)(key), WORD_ROUND((ht)->cchKey)); \
- while ( 0 )
-
- /* the following are useful for bitmaps */
- /* Format is like this (if 1 word = 4 bits): 3210 7654 ba98 fedc ... */
-typedef ulong HTBitmapPart; /* this has to be unsigned, for >> */
-typedef HTBitmapPart HTBitmap[1<<LOG_BM_WORDS];
- /* # of bytes in a bitmap of cBits bits, rounded up to whole words */
-#define BM_BYTES(cBits) ( (((cBits) + 8*sizeof(ulong)-1) >> LOG_WORD_SIZE) << (LOG_WORD_SIZE-3) )
-#define MOD2(i, logmod) ( (i) & ((1<<(logmod))-1) )
-#define DIV_NUM_ENTRIES(i) ( (i) >> LOG_WORD_SIZE )
-#define MOD_NUM_ENTRIES(i) ( MOD2(i, LOG_WORD_SIZE) )
-#define MODBIT(i) ( ((ulong)1) << MOD_NUM_ENTRIES(i) )
-
-#define TEST_BITMAP(bm, i) ( (bm)[DIV_NUM_ENTRIES(i)] & MODBIT(i) ? 1 : 0 )
-#define SET_BITMAP(bm, i) (bm)[DIV_NUM_ENTRIES(i)] |= MODBIT(i)
-#define CLEAR_BITMAP(bm, i) (bm)[DIV_NUM_ENTRIES(i)] &= ~MODBIT(i)
-
- /* the following are useful for reading and writing hashtables */
-#define READ_UL(fp, data) \
- do { \
- long _ul; \
- fread(&_ul, sizeof(_ul), 1, (fp)); \
- data = ntohl(_ul); \
- } while (0)
-
-#define WRITE_UL(fp, data) \
- do { \
- long _ul = htonl((long)(data)); \
- fwrite(&_ul, sizeof(_ul), 1, (fp)); \
- } while (0)
-
- /* Moves data from disk to memory if necessary. Note dataRead cannot be *
- * NULL, because then we might as well (and do) load the data into memory */
-#define LOAD_AND_RETURN(ht, loadCommand) /* lC returns an HTItem * */ \
- if ( !(ht)->fpData ) /* data is stored in memory */ \
- return (loadCommand); \
- else /* must read data off of disk */ \
- { \
- int cchData; \
- HTItem *bck; \
- if ( (ht)->bckData.data ) free((char *)(ht)->bckData.data); \
- ht->bckData.data = (ulong)NULL; /* needed if loadCommand fails */ \
- bck = (loadCommand); \
- if ( bck == NULL ) /* loadCommand failed: key not found */ \
- return NULL; \
- else \
- (ht)->bckData = *bck; \
- fseek(ht->fpData, (ht)->bckData.data, SEEK_SET); \
- READ_UL((ht)->fpData, cchData); \
- (ht)->bckData.data = (ulong)(ht)->dataRead((ht)->fpData, cchData); \
- return &((ht)->bckData); \
- }
-
-
-/* ======================================================================== */
-/* UTILITY ROUTINES */
-/* ---------------------- */
-
-/* HTsmalloc() -- safe malloc
- * allocates memory, or crashes if the allocation fails.
- */
-static void *HTsmalloc(unsigned long size)
-{
- void *retval;
-
- if ( size == 0 )
- return NULL;
- retval = (void *)malloc(size);
- if ( !retval )
- {
- fprintf(stderr, "HTsmalloc: Unable to allocate %lu bytes of memory\n",
- size);
- exit(1);
- }
- return retval;
-}
-
-/* HTscalloc() -- safe calloc
- * allocates memory and initializes it to 0, or crashes if
- * the allocation fails.
- */
-static void *HTscalloc(unsigned long size)
-{
- void *retval;
-
- retval = (void *)calloc(size, 1);
- if ( !retval && size > 0 )
- {
- fprintf(stderr, "HTscalloc: Unable to allocate %lu bytes of memory\n",
- size);
- exit(1);
- }
- return retval;
-}
-
-/* HTsrealloc() -- safe realloc
- * grows the amount of memory from a source, or crashes if
- * the allocation fails.
- */
-static void *HTsrealloc(void *ptr, unsigned long new_size, long delta)
-{
- if ( ptr == NULL )
- return HTsmalloc(new_size);
- ptr = realloc(ptr, new_size);
- if ( !ptr && new_size > 0 )
- {
- fprintf(stderr, "HTsrealloc: Unable to reallocate %lu bytes of memory\n",
- new_size);
- exit(1);
- }
- return ptr;
-}
-
-/* HTfree() -- keep track of memory use
- * frees memory using free, but updates count of how much memory
- * is being used.
- */
-static void HTfree(void *ptr, unsigned long size)
-{
- if ( size > 0 ) /* some systems seem to not like freeing NULL */
- free(ptr);
-}
-
-/*************************************************************************\
-| HTcopy() |
-| Sometimes we interpret data as a ulong. But ulongs must be |
-| aligned on some machines, so instead of casting we copy. |
-\*************************************************************************/
-
-unsigned long HTcopy(char *ul)
-{
- unsigned long retval;
-
- memcpy(&retval, ul, sizeof(retval));
- return retval;
-}
-
-/*************************************************************************\
-| HTSetupKeyTrunc() |
-| If keys are stored directly but cchKey is less than |
-| sizeof(ulong), this cuts off the bits at the end. |
-\*************************************************************************/
-
-static void HTSetupKeyTrunc(void)
-{
- int i, j;
-
- for ( i = 0; i < sizeof(unsigned long); i++ )
- for ( j = 0; j < sizeof(unsigned long); j++ )
- grgKeyTruncMask[i][j] = j < i ? 255 : 0; /* chars have 8 bits */
-}
-
-
-/* ======================================================================== */
-/* TABLE ROUTINES */
-/* -------------------- */
-
-/* The idea is that a hashtable with (logically) t buckets is divided
- * into t/M groups of M buckets each. (M is a constant set in
- * LOG_BM_WORDS for efficiency.) Each group is stored sparsely.
- * Thus, inserting into the table causes some array to grow, which is
- * slow but still constant time. Lookup involves doing a
- * logical-position-to-sparse-position lookup, which is also slow but
- * constant time. The larger M is, the slower these operations are
- * but the less overhead (slightly).
- *
- * To store the sparse array, we store a bitmap B, where B[i] = 1 iff
- * bucket i is non-empty. Then to look up bucket i we really look up
- * array[# of 1s before i in B]. This is constant time for fixed M.
- *
- * Terminology: the position of an item in the overall table (from
- * 1 .. t) is called its "location." The logical position in a group
- * (from 1 .. M ) is called its "position." The actual location in
- * the array (from 1 .. # of non-empty buckets in the group) is
- * called its "offset."
- *
- * The following operations are supported:
- * o Allocate an array with t buckets, all empty
- * o Free an array (but not whatever was stored in the buckets)
- * o Tell whether or not a bucket is empty
- * o Return a bucket with a given location
- * o Set the value of a bucket at a given location
- * o Iterate through all the buckets in the array
- * o Read and write an occupancy bitmap to disk
- * o Return how much memory is being allocated by the array structure
- */
-
-#ifndef SparseBucket /* by default, each bucket holds an HTItem */
-#define SparseBucket HTItem
-#endif
-
-typedef struct SparseBin {
- SparseBucket *binSparse;
- HTBitmap bmOccupied; /* bmOccupied[i] is 1 if bucket i has an item */
- short cOccupied; /* size of binSparse; useful for iterators, eg */
-} SparseBin;
-
-typedef struct SparseIterator {
- long posGroup;
- long posOffset;
- SparseBin *binSparse; /* state info, to avoid args for NextBucket() */
- ulong cBuckets;
-} SparseIterator;
-
-#define LOG_LOW_BIN_SIZE ( LOG_BM_WORDS+LOG_WORD_SIZE )
-#define SPARSE_GROUPS(cBuckets) ( (((cBuckets)-1) >> LOG_LOW_BIN_SIZE) + 1 )
-
- /* we need a small function to figure out # of items set in the bm */
-static HTOffset EntriesUpto(HTBitmapPart *bm, int i)
-{ /* returns # of set bits in 0..i-1 */
- HTOffset retval = 0;
- static HTOffset rgcBits[256] = /* # of bits set in one char */
- {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
- 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
- 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
- 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
- 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
- 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
- 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
- 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
- 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
- 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
- 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
- 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
- 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
- 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
- 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
- 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8};
-
- if ( i == 0 ) return 0;
- for ( ; i > sizeof(*bm)*8; i -= sizeof(*bm)*8, bm++ )
- { /* think of it as loop unrolling */
-#if LOG_WORD_SIZE >= 3 /* 1 byte per word, or more */
- retval += rgcBits[*bm & 255]; /* get the low byte */
-#if LOG_WORD_SIZE >= 4 /* at least 2 bytes */
- retval += rgcBits[(*bm >> 8) & 255];
-#if LOG_WORD_SIZE >= 5 /* at least 4 bytes */
- retval += rgcBits[(*bm >> 16) & 255];
- retval += rgcBits[(*bm >> 24) & 255];
-#if LOG_WORD_SIZE >= 6 /* 8 bytes! */
- retval += rgcBits[(*bm >> 32) & 255];
- retval += rgcBits[(*bm >> 40) & 255];
- retval += rgcBits[(*bm >> 48) & 255];
- retval += rgcBits[(*bm >> 56) & 255];
-#if LOG_WORD_SIZE >= 7 /* not a concern for a while... */
-#error Need to rewrite EntriesUpto to support such big words
-#endif /* >8 bytes */
-#endif /* 8 bytes */
-#endif /* 4 bytes */
-#endif /* 2 bytes */
-#endif /* 1 byte */
- }
- switch ( i ) { /* from 0 to 63 */
- case 0:
- return retval;
-#if LOG_WORD_SIZE >= 3 /* 1 byte per word, or more */
- case 1: case 2: case 3: case 4: case 5: case 6: case 7: case 8:
- return (retval + rgcBits[*bm & ((1 << i)-1)]);
-#if LOG_WORD_SIZE >= 4 /* at least 2 bytes */
- case 9: case 10: case 11: case 12: case 13: case 14: case 15: case 16:
- return (retval + rgcBits[*bm & 255] +
- rgcBits[(*bm >> 8) & ((1 << (i-8))-1)]);
-#if LOG_WORD_SIZE >= 5 /* at least 4 bytes */
- case 17: case 18: case 19: case 20: case 21: case 22: case 23: case 24:
- return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] +
- rgcBits[(*bm >> 16) & ((1 << (i-16))-1)]);
- case 25: case 26: case 27: case 28: case 29: case 30: case 31: case 32:
- return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] +
- rgcBits[(*bm >> 16) & 255] +
- rgcBits[(*bm >> 24) & ((1 << (i-24))-1)]);
-#if LOG_WORD_SIZE >= 6 /* 8 bytes! */
- case 33: case 34: case 35: case 36: case 37: case 38: case 39: case 40:
- return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] +
- rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] +
- rgcBits[(*bm >> 32) & ((1 << (i-32))-1)]);
- case 41: case 42: case 43: case 44: case 45: case 46: case 47: case 48:
- return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] +
- rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] +
- rgcBits[(*bm >> 32) & 255] +
- rgcBits[(*bm >> 40) & ((1 << (i-40))-1)]);
- case 49: case 50: case 51: case 52: case 53: case 54: case 55: case 56:
- return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] +
- rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] +
- rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & 255] +
- rgcBits[(*bm >> 48) & ((1 << (i-48))-1)]);
- case 57: case 58: case 59: case 60: case 61: case 62: case 63: case 64:
- return (retval + rgcBits[*bm & 255] + rgcBits[(*bm >> 8) & 255] +
- rgcBits[(*bm >> 16) & 255] + rgcBits[(*bm >> 24) & 255] +
- rgcBits[(*bm >> 32) & 255] + rgcBits[(*bm >> 40) & 255] +
- rgcBits[(*bm >> 48) & 255] +
- rgcBits[(*bm >> 56) & ((1 << (i-56))-1)]);
-#endif /* 8 bytes */
-#endif /* 4 bytes */
-#endif /* 2 bytes */
-#endif /* 1 byte */
- }
- assert("" == "word size is too big in EntriesUpto()");
- return -1;
-}
-#define SPARSE_POS_TO_OFFSET(bm, i) ( EntriesUpto(&((bm)[0]), i) )
-#define SPARSE_BUCKET(bin, location) \
- ( (bin)[(location) >> LOG_LOW_BIN_SIZE].binSparse + \
- SPARSE_POS_TO_OFFSET((bin)[(location)>>LOG_LOW_BIN_SIZE].bmOccupied, \
- MOD2(location, LOG_LOW_BIN_SIZE)) )
-
-
-/*************************************************************************\
-| SparseAllocate() |
-| SparseFree() |
-| Allocates, sets-to-empty, and frees a sparse array. All you need |
-| to tell me is how many buckets you want. I return the number of |
-| buckets I actually allocated, setting the array as a parameter. |
-| Note that you have to set auxiliary parameters, like cOccupied.  |
-\*************************************************************************/
-
-static ulong SparseAllocate(SparseBin **pbinSparse, ulong cBuckets)
-{
- int cGroups = SPARSE_GROUPS(cBuckets);
-
- *pbinSparse = (SparseBin *) HTscalloc(sizeof(**pbinSparse) * cGroups);
- return cGroups << LOG_LOW_BIN_SIZE;
-}
-
-static SparseBin *SparseFree(SparseBin *binSparse, ulong cBuckets)
-{
- ulong iGroup, cGroups = SPARSE_GROUPS(cBuckets);
-
- for ( iGroup = 0; iGroup < cGroups; iGroup++ )
- HTfree(binSparse[iGroup].binSparse, (sizeof(*binSparse[iGroup].binSparse)
- * binSparse[iGroup].cOccupied));
- HTfree(binSparse, sizeof(*binSparse) * cGroups);
- return NULL;
-}
-
-/*************************************************************************\
-| SparseIsEmpty() |
-| SparseFind() |
-| You give me a location (ie a number between 1 and t), and I |
-| return the bucket at that location, or NULL if the bucket is |
-| empty. It's OK to call Find() on an empty table. |
-\*************************************************************************/
-
-static int SparseIsEmpty(SparseBin *binSparse, ulong location)
-{
- return !TEST_BITMAP(binSparse[location>>LOG_LOW_BIN_SIZE].bmOccupied,
- MOD2(location, LOG_LOW_BIN_SIZE));
-}
-
-static SparseBucket *SparseFind(SparseBin *binSparse, ulong location)
-{
- if ( SparseIsEmpty(binSparse, location) )
- return NULL;
- return SPARSE_BUCKET(binSparse, location);
-}
-
-/*************************************************************************\
-| SparseInsert() |
-| You give me a location, and contents to put there, and I insert |
-| into that location and RETURN a pointer to the location. If |
-| bucket was already occupied, I write over the contents only if |
-| *pfOverwrite is 1. We set *pfOverwrite to 1 if there was someone |
-| there (whether or not we overwrote) and 0 else. |
-\*************************************************************************/
-
-static SparseBucket *SparseInsert(SparseBin *binSparse, SparseBucket *bckInsert,
- ulong location, int *pfOverwrite)
-{
- SparseBucket *bckPlace;
- HTOffset offset;
-
- bckPlace = SparseFind(binSparse, location);
- if ( bckPlace ) /* means we replace old contents */
- {
- if ( *pfOverwrite )
- *bckPlace = *bckInsert;
- *pfOverwrite = 1;
- return bckPlace;
- }
-
- binSparse += (location >> LOG_LOW_BIN_SIZE);
- offset = SPARSE_POS_TO_OFFSET(binSparse->bmOccupied,
- MOD2(location, LOG_LOW_BIN_SIZE));
- binSparse->binSparse = (SparseBucket *)
- HTsrealloc(binSparse->binSparse,
- sizeof(*binSparse->binSparse) * ++binSparse->cOccupied,
- sizeof(*binSparse->binSparse));
- memmove(binSparse->binSparse + offset+1,
- binSparse->binSparse + offset,
- (binSparse->cOccupied-1 - offset) * sizeof(*binSparse->binSparse));
- binSparse->binSparse[offset] = *bckInsert;
- SET_BITMAP(binSparse->bmOccupied, MOD2(location, LOG_LOW_BIN_SIZE));
- *pfOverwrite = 0;
- return binSparse->binSparse + offset;
-}
-
-/*************************************************************************\
-| SparseFirstBucket() |
-| SparseNextBucket() |
-| SparseCurrentBit() |
-| Iterate through the occupied buckets of a sparse hashtable. You |
-| must, of course, have allocated space yourself for the iterator. |
-\*************************************************************************/
-
-static SparseBucket *SparseNextBucket(SparseIterator *iter)
-{
- if ( iter->posOffset != -1 && /* not called from FirstBucket()? */
- (++iter->posOffset < iter->binSparse[iter->posGroup].cOccupied) )
- return iter->binSparse[iter->posGroup].binSparse + iter->posOffset;
-
- iter->posOffset = 0; /* start the next group */
- for ( iter->posGroup++; iter->posGroup < SPARSE_GROUPS(iter->cBuckets);
- iter->posGroup++ )
- if ( iter->binSparse[iter->posGroup].cOccupied > 0 )
- return iter->binSparse[iter->posGroup].binSparse; /* + 0 */
- return NULL; /* all remaining groups were empty */
-}
-
-static SparseBucket *SparseFirstBucket(SparseIterator *iter,
- SparseBin *binSparse, ulong cBuckets)
-{
- iter->binSparse = binSparse; /* set it up for NextBucket() */
- iter->cBuckets = cBuckets;
- iter->posOffset = -1; /* when we advance, we're at 0 */
- iter->posGroup = -1;
- return SparseNextBucket(iter);
-}
-
-/*************************************************************************\
-| SparseWrite() |
-| SparseRead() |
-| These are routines for storing a sparse hashtable onto disk. We |
-| store the number of buckets and a bitmap indicating which buckets |
-| are allocated (occupied). The actual contents of the buckets |
-| must be stored separately. |
-\*************************************************************************/
-
-static void SparseWrite(FILE *fp, SparseBin *binSparse, ulong cBuckets)
-{
- ulong i, j;
-
- WRITE_UL(fp, cBuckets);
- for ( i = 0; i < SPARSE_GROUPS(cBuckets); i++ )
-      for ( j = 0; j < (1<<LOG_LOW_BIN_SIZE) / 8 / sizeof(ulong); j++ )
-         WRITE_UL(fp, binSparse[i].bmOccupied[j]);
-}
-
-/* (The bodies of SparseRead() and SparseMemory(), plus the start of the
- * dense-table code, are missing from this copy of the diff; the loop
- * above and DenseClear() below are reconstructed from context.) */
-
-static void DenseClear(DenseBin *bin, ulong cBuckets)
-{
-   while ( cBuckets-- )
-      DENSE_SET_EMPTY(bin->rgBuckets, cBuckets);
-}
-
-static ulong DenseAllocate(DenseBin **pbin, ulong cBuckets)
-{
-   *pbin = (DenseBin *) HTsmalloc(sizeof(**pbin)); /* the struct, not the ptr */
- (*pbin)->rgBuckets = (DenseBucket *) HTsmalloc(sizeof(*(*pbin)->rgBuckets)
- * cBuckets);
- DenseClear(*pbin, cBuckets);
- return cBuckets;
-}
-
-static DenseBin *DenseFree(DenseBin *bin, ulong cBuckets)
-{
- HTfree(bin->rgBuckets, sizeof(*bin->rgBuckets) * cBuckets);
- HTfree(bin, sizeof(*bin));
- return NULL;
-}
-
-static int DenseIsEmpty(DenseBin *bin, ulong location)
-{
- return DENSE_IS_EMPTY(bin->rgBuckets, location);
-}
-
-static DenseBucket *DenseFind(DenseBin *bin, ulong location)
-{
- if ( DenseIsEmpty(bin, location) )
- return NULL;
- return bin->rgBuckets + location;
-}
-
-static DenseBucket *DenseInsert(DenseBin *bin, DenseBucket *bckInsert,
- ulong location, int *pfOverwrite)
-{
- DenseBucket *bckPlace;
-
- bckPlace = DenseFind(bin, location);
- if ( bckPlace ) /* means something is already there */
- {
- if ( *pfOverwrite )
- *bckPlace = *bckInsert;
- *pfOverwrite = 1; /* set to 1 to indicate someone was there */
- return bckPlace;
- }
- else
- {
- bin->rgBuckets[location] = *bckInsert;
- *pfOverwrite = 0;
- return bin->rgBuckets + location;
- }
-}
-
-static DenseBucket *DenseNextBucket(DenseIterator *iter)
-{
- for ( iter->pos++; iter->pos < iter->cBuckets; iter->pos++ )
- if ( !DenseIsEmpty(iter->bin, iter->pos) )
- return iter->bin->rgBuckets + iter->pos;
- return NULL; /* all remaining groups were empty */
-}
-
-static DenseBucket *DenseFirstBucket(DenseIterator *iter,
- DenseBin *bin, ulong cBuckets)
-{
- iter->bin = bin; /* set it up for NextBucket() */
- iter->cBuckets = cBuckets;
- iter->pos = -1; /* thus the next bucket will be 0 */
- return DenseNextBucket(iter);
-}
-
-static void DenseWrite(FILE *fp, DenseBin *bin, ulong cBuckets)
-{
- ulong pos = 0, bit, bm;
-
- WRITE_UL(fp, cBuckets);
- while ( pos < cBuckets )
- {
- bm = 0;
- for ( bit = 0; bit < 8*sizeof(ulong); bit++ )
- {
- if ( !DenseIsEmpty(bin, pos) )
- SET_BITMAP(&bm, bit); /* in fks-hash.h */
- if ( ++pos == cBuckets )
- break;
- }
- WRITE_UL(fp, bm);
- }
-}
-
-static ulong DenseRead(FILE *fp, DenseBin **pbin)
-{
- ulong pos = 0, bit, bm, cBuckets;
-
- READ_UL(fp, cBuckets);
- cBuckets = DenseAllocate(pbin, cBuckets);
- while ( pos < cBuckets )
- {
- READ_UL(fp, bm);
- for ( bit = 0; bit < 8*sizeof(ulong); bit++ )
- {
- if ( TEST_BITMAP(&bm, bit) ) /* in fks-hash.h */
- DENSE_SET_OCCUPIED((*pbin)->rgBuckets, pos);
- else
- DENSE_SET_EMPTY((*pbin)->rgBuckets, pos);
- if ( ++pos == cBuckets )
- break;
- }
- }
- return cBuckets;
-}
-
-static ulong DenseMemory(ulong cBuckets, ulong cOccupied)
-{
- return cBuckets * sizeof(DenseBucket);
-}
-
-
-/* ======================================================================== */
-/* HASHING ROUTINES */
-/* ---------------------- */
-
-/* Implements a simple quadratic hashing scheme. We have a single hash
- * table of size t and a single hash function h(x). When inserting an
- * item, first we try h(x) % t. If it's occupied, we try h(x) +
- * i*(i-1)/2 % t for increasing values of i until we hit a not-occupied
- * space. To make this dynamic, we double the size of the hash table as
- * soon as more than half the cells are occupied. When deleting, we can
- * choose to shrink the hashtable when less than a quarter of the
- * cells are occupied, or we can choose never to shrink the hashtable.
- * For lookup, we check (h(x) + i*(i-1)/2) % t (starting with i=0) until
- * we get a match or we hit an empty space. Note that as a result,
- * we can't make a cell empty on deletion, or lookups may end prematurely.
- * Instead we mark the cell as "deleted." We thus steal the value
- * DELETED as a possible "data" value. As long as data are pointers,
- * that's ok.
- * The hash increment we use, i(i-1)/2, is not the standard quadratic
- * hash increment, which is i^2. i(i-1)/2 covers the entire bucket space
- * when the hashtable size is a power of two, as it is for us. In fact,
- * the first n probes cover n distinct buckets; then it repeats. This
- * guarantees insertion will always succeed.
- * If you want linear hashing instead, set JUMP in chash.h.  You can also
- * change various other parameters there.
- */
-
-/*************************************************************************\
-| Hash() |
-| The hash function I use is due to Bob Jenkins (see |
-| http://burtleburtle.net/bob/hash/evahash.html |
-| According to http://burtleburtle.net/bob/c/lookup2.c, |
-| his implementation is public domain.) |
-| It takes 36 instructions, in 18 cycles if you're lucky. |
-|      hashing depends on the fact that the hashtable size is always a    |
-| power of 2. cBuckets is probably ht->cBuckets. |
-\*************************************************************************/
-
-#if LOG_WORD_SIZE == 5 /* 32 bit words */
-
-#define mix(a,b,c) \
-{ \
- a -= b; a -= c; a ^= (c>>13); \
- b -= c; b -= a; b ^= (a<<8); \
- c -= a; c -= b; c ^= (b>>13); \
- a -= b; a -= c; a ^= (c>>12); \
- b -= c; b -= a; b ^= (a<<16); \
- c -= a; c -= b; c ^= (b>>5); \
- a -= b; a -= c; a ^= (c>>3); \
- b -= c; b -= a; b ^= (a<<10); \
- c -= a; c -= b; c ^= (b>>15); \
-}
-#ifdef WORD_HASH /* play with this on little-endian machines */
-#define WORD_AT(ptr) ( *(ulong *)(ptr) )
-#else
-#define WORD_AT(ptr) ( (ptr)[0] + ((ulong)(ptr)[1]<<8) + \
- ((ulong)(ptr)[2]<<16) + ((ulong)(ptr)[3]<<24) )
-#endif
-
-#elif LOG_WORD_SIZE == 6 /* 64 bit words */
-
-#define mix(a,b,c) \
-{ \
- a -= b; a -= c; a ^= (c>>43); \
- b -= c; b -= a; b ^= (a<<9); \
- c -= a; c -= b; c ^= (b>>8); \
- a -= b; a -= c; a ^= (c>>38); \
- b -= c; b -= a; b ^= (a<<23); \
- c -= a; c -= b; c ^= (b>>5); \
- a -= b; a -= c; a ^= (c>>35); \
- b -= c; b -= a; b ^= (a<<49); \
- c -= a; c -= b; c ^= (b>>11); \
- a -= b; a -= c; a ^= (c>>12); \
- b -= c; b -= a; b ^= (a<<18); \
- c -= a; c -= b; c ^= (b>>22); \
-}
-#ifdef WORD_HASH /* alpha is little-endian, btw */
-#define WORD_AT(ptr) ( *(ulong *)(ptr) )
-#else
-#define WORD_AT(ptr) ( (ptr)[0] + ((ulong)(ptr)[1]<<8) + \
- ((ulong)(ptr)[2]<<16) + ((ulong)(ptr)[3]<<24) + \
- ((ulong)(ptr)[4]<<32) + ((ulong)(ptr)[5]<<40) + \
- ((ulong)(ptr)[6]<<48) + ((ulong)(ptr)[7]<<56) )
-#endif
-
-#else /* neither 32 nor 64 bit words */
-#error This hash function can only hash 32 or 64 bit words. Sorry.
-#endif
-
-static ulong Hash(HashTable *ht, char *key, ulong cBuckets)
-{
- ulong a, b, c, cchKey, cchKeyOrig;
-
- cchKeyOrig = ht->cchKey == NULL_TERMINATED ? strlen(key) : ht->cchKey;
- a = b = c = 0x9e3779b9; /* the golden ratio; an arbitrary value */
-
- for ( cchKey = cchKeyOrig; cchKey >= 3 * sizeof(ulong);
- cchKey -= 3 * sizeof(ulong), key += 3 * sizeof(ulong) )
- {
- a += WORD_AT(key);
- b += WORD_AT(key + sizeof(ulong));
- c += WORD_AT(key + sizeof(ulong)*2);
- mix(a,b,c);
- }
-
- c += cchKeyOrig;
- switch ( cchKey ) { /* deal with rest. Cases fall through */
-#if LOG_WORD_SIZE == 5
- case 11: c += (ulong)key[10]<<24;
- case 10: c += (ulong)key[9]<<16;
- case 9 : c += (ulong)key[8]<<8;
- /* the first byte of c is reserved for the length */
- case 8 : b += WORD_AT(key+4); a+= WORD_AT(key); break;
- case 7 : b += (ulong)key[6]<<16;
- case 6 : b += (ulong)key[5]<<8;
- case 5 : b += key[4];
- case 4 : a += WORD_AT(key); break;
- case 3 : a += (ulong)key[2]<<16;
- case 2 : a += (ulong)key[1]<<8;
- case 1 : a += key[0];
- /* case 0 : nothing left to add */
-#elif LOG_WORD_SIZE == 6
- case 23: c += (ulong)key[22]<<56;
- case 22: c += (ulong)key[21]<<48;
- case 21: c += (ulong)key[20]<<40;
- case 20: c += (ulong)key[19]<<32;
- case 19: c += (ulong)key[18]<<24;
- case 18: c += (ulong)key[17]<<16;
- case 17: c += (ulong)key[16]<<8;
- /* the first byte of c is reserved for the length */
- case 16: b += WORD_AT(key+8); a+= WORD_AT(key); break;
- case 15: b += (ulong)key[14]<<48;
- case 14: b += (ulong)key[13]<<40;
- case 13: b += (ulong)key[12]<<32;
- case 12: b += (ulong)key[11]<<24;
- case 11: b += (ulong)key[10]<<16;
- case 10: b += (ulong)key[ 9]<<8;
- case 9: b += (ulong)key[ 8];
- case 8: a += WORD_AT(key); break;
- case 7: a += (ulong)key[ 6]<<48;
- case 6: a += (ulong)key[ 5]<<40;
- case 5: a += (ulong)key[ 4]<<32;
- case 4: a += (ulong)key[ 3]<<24;
- case 3: a += (ulong)key[ 2]<<16;
- case 2: a += (ulong)key[ 1]<<8;
- case 1: a += (ulong)key[ 0];
- /* case 0: nothing left to add */
-#endif
- }
- mix(a,b,c);
- return c & (cBuckets-1);
-}
-
-
-/*************************************************************************\
-| Rehash() |
-| You give me a hashtable, a new size, and a bucket to follow, and |
-| I resize the hashtable's bin to be the new size, rehashing |
-| everything in it. I keep particular track of the bucket you pass |
-| in, and RETURN a pointer to where the item in the bucket got to. |
-| (If you pass in NULL, I return an arbitrary pointer.) |
-\*************************************************************************/
-
-static HTItem *Rehash(HashTable *ht, ulong cNewBuckets, HTItem *bckWatch)
-{
- Table *tableNew;
- ulong iBucketFirst;
- HTItem *bck, *bckNew = NULL;
- ulong offset; /* the i in h(x) + i*(i-1)/2 */
- int fOverwrite = 0; /* not an issue: there can be no collisions */
-
- assert( ht->table );
- cNewBuckets = Table(Allocate)(&tableNew, cNewBuckets);
- /* Since we RETURN the new position of bckWatch, we want *
- * to make sure it doesn't get moved due to some table *
- * rehashing that comes after it's inserted. Thus, we *
- * have to put it in last. This makes the loop weird. */
- for ( bck = HashFirstBucket(ht); ; bck = HashNextBucket(ht) )
- {
- if ( bck == NULL ) /* we're done iterating, so look at bckWatch */
- {
- bck = bckWatch;
- if ( bck == NULL ) /* I guess bckWatch wasn't specified */
- break;
- }
- else if ( bck == bckWatch )
- continue; /* ignore if we see it during the iteration */
-
- offset = 0; /* a new i for a new bucket */
- for ( iBucketFirst = Hash(ht, KEY_PTR(ht, bck->key), cNewBuckets);
- !Table(IsEmpty)(tableNew, iBucketFirst);
- iBucketFirst = (iBucketFirst + JUMP(KEY_PTR(ht,bck->key), offset))
- & (cNewBuckets-1) )
- ;
- bckNew = Table(Insert)(tableNew, bck, iBucketFirst, &fOverwrite);
- if ( bck == bckWatch ) /* we're done with the last thing to do */
- break;
- }
- Table(Free)(ht->table, ht->cBuckets);
- ht->table = tableNew;
- ht->cBuckets = cNewBuckets;
- ht->cDeletedItems = 0;
- return bckNew; /* new position of bckWatch, which was inserted last */
-}
-
-/*************************************************************************\
-| Find() |
-| Does the quadratic searching stuff. RETURNS NULL if we don't |
-| find an object with the given key, and a pointer to the Item |
-| holding the key, if we do. Also sets posLastFind. If piEmpty is |
-| non-NULL, we set it to the first open bucket we pass; helpful for |
-| doing a later insert if the search fails, for instance. |
-\*************************************************************************/
-
-static HTItem *Find(HashTable *ht, ulong key, ulong *piEmpty)
-{
- ulong iBucketFirst;
- HTItem *item;
- ulong offset = 0; /* the i in h(x) + i*(i-1)/2 */
- int fFoundEmpty = 0; /* set when we pass over an empty bucket */
-
- ht->posLastFind = NULL; /* set up for failure: a new find starts */
- if ( ht->table == NULL ) /* empty hash table: find is bound to fail */
- return NULL;
-
- iBucketFirst = Hash(ht, KEY_PTR(ht, key), ht->cBuckets);
- while ( 1 ) /* now try all i > 0 */
- {
- item = Table(Find)(ht->table, iBucketFirst);
- if ( item == NULL ) /* it's not in the table */
- {
- if ( piEmpty && !fFoundEmpty ) *piEmpty = iBucketFirst;
- return NULL;
- }
- else
- {
- if ( IS_BCK_DELETED(item) ) /* always 0 ifdef INSERT_ONLY */
- {
- if ( piEmpty && !fFoundEmpty )
- {
- *piEmpty = iBucketFirst;
- fFoundEmpty = 1;
- }
- } else
- if ( !KEY_CMP(ht, key, item->key) ) /* must be occupied */
- {
- ht->posLastFind = item;
- return item; /* we found it! */
- }
- }
- iBucketFirst = ((iBucketFirst + JUMP(KEY_PTR(ht, key), offset))
- & (ht->cBuckets-1));
- }
-}
-
-/*************************************************************************\
-| Insert() |
-| If an item with the key already exists in the hashtable, RETURNS |
-| a pointer to the item (replacing its data if fOverwrite is 1). |
-| If not, we find the first place-to-insert (which Find() is nice |
-| enough to set for us) and insert the item there, RETURNing a |
-| pointer to the item. We might grow the hashtable if it's getting |
-| full. Note we include buckets holding DELETED when determining |
-| fullness, because they slow down searching. |
-\*************************************************************************/
-
-static ulong NextPow2(ulong x) /* returns next power of 2 > x, or 2^31 */
-{
- if ( ((x << 1) >> 1) != x ) /* next power of 2 overflows */
- x >>= 1; /* so we return highest power of 2 we can */
- while ( (x & (x-1)) != 0 ) /* blacks out all but the top bit */
- x &= (x-1);
- return x << 1; /* makes it the *next* power of 2 */
-}
-
-static HTItem *Insert(HashTable *ht, ulong key, ulong data, int fOverwrite)
-{
- HTItem *item, bckInsert;
-   ulong iEmpty; /* first empty bucket the key probes */
-
- if ( ht->table == NULL ) /* empty hash table: find is bound to fail */
- return NULL;
- item = Find(ht, key, &iEmpty);
- ht->posLastFind = NULL; /* last operation is insert, not find */
- if ( item )
- {
- if ( fOverwrite )
- item->data = data; /* key already matches */
- return item;
- }
-
- COPY_KEY(ht, bckInsert.key, key); /* make our own copy of the key */
- bckInsert.data = data; /* oh, and the data too */
- item = Table(Insert)(ht->table, &bckInsert, iEmpty, &fOverwrite);
- if ( fOverwrite ) /* we overwrote a deleted bucket */
- ht->cDeletedItems--;
- ht->cItems++; /* insert couldn't have overwritten */
- if ( ht->cDeltaGoalSize > 0 ) /* closer to our goal size */
- ht->cDeltaGoalSize--;
- if ( ht->cItems + ht->cDeletedItems >= ht->cBuckets * OCCUPANCY_PCT
- || ht->cDeltaGoalSize < 0 ) /* we must've overestimated # of deletes */
- item = Rehash(ht,
- NextPow2((ulong)(((ht->cDeltaGoalSize > 0 ?
- ht->cDeltaGoalSize : 0)
- + ht->cItems) / OCCUPANCY_PCT)),
- item);
- return item;
-}
-
-/*************************************************************************\
-| Delete() |
-| Removes the item from the hashtable, and if fShrink is 1, will |
-| shrink the hashtable if it's too small (ie even after halving, |
-| the ht would be less than half full, though in order to avoid |
-| oscillating table size, we insist that after halving the ht would |
-| be less than 40% full). RETURNS 1 if the item was found, 0 else. |
-| If fLastFindSet is true, then this function is basically |
-| DeleteLastFind. |
-\*************************************************************************/
-
-static int Delete(HashTable *ht, ulong key, int fShrink, int fLastFindSet)
-{
- if ( !fLastFindSet && !Find(ht, key, NULL) )
- return 0;
- SET_BCK_DELETED(ht, ht->posLastFind); /* find set this, how nice */
- ht->cItems--;
- ht->cDeletedItems++;
- if ( ht->cDeltaGoalSize < 0 ) /* heading towards our goal of deletion */
- ht->cDeltaGoalSize++;
-
- if ( fShrink && ht->cItems < ht->cBuckets * OCCUPANCY_PCT*0.4
- && ht->cDeltaGoalSize >= 0 /* wait until we're done deleting */
- && (ht->cBuckets >> 1) >= MIN_HASH_SIZE ) /* shrink */
- Rehash(ht,
- NextPow2((ulong)((ht->cItems+ht->cDeltaGoalSize)/OCCUPANCY_PCT)),
- NULL);
- ht->posLastFind = NULL; /* last operation is delete, not find */
- return 1;
-}
-
-
-/* ======================================================================== */
-/* USER-VISIBLE API */
-/* ---------------------- */
-
-/*************************************************************************\
-| AllocateHashTable() |
-| ClearHashTable() |
-| FreeHashTable() |
-| Allocate() allocates a hash table and sets up size parameters. |
-| Free() frees it. Clear() deletes all the items from the hash |
-|      table, but does not free the table itself.                         |
-| cchKey is < 0 if the keys you send me are meant to be pointers |
-| to \0-terminated strings. Then -cchKey is the maximum key size. |
-| If cchKey < one word (ulong), the keys you send me are the keys |
-| themselves; else the keys you send me are pointers to the data. |
-| If fSaveKeys is 1, we copy any keys given to us to insert. We |
-| also free these keys when freeing the hash table. If it's 0, the |
-| user is responsible for key space management. |
-| AllocateHashTable() RETURNS a hash table; the others TAKE one. |
-\*************************************************************************/
-
-HashTable *AllocateHashTable(int cchKey, int fSaveKeys)
-{
- HashTable *ht;
-
- ht = (HashTable *) HTsmalloc(sizeof(*ht)); /* set everything to 0 */
- ht->cBuckets = Table(Allocate)(&ht->table, MIN_HASH_SIZE);
- ht->cchKey = cchKey <= 0 ? NULL_TERMINATED : cchKey;
- ht->cItems = 0;
- ht->cDeletedItems = 0;
- ht->fSaveKeys = fSaveKeys;
- ht->cDeltaGoalSize = 0;
- ht->iter = HTsmalloc( sizeof(TableIterator) );
-
- ht->fpData = NULL; /* set by HashLoad, maybe */
- ht->bckData.data = (ulong) NULL; /* this must be done */
- HTSetupKeyTrunc(); /* in util.c */
- return ht;
-}
-
-void ClearHashTable(HashTable *ht)
-{
- HTItem *bck;
-
- if ( STORES_PTR(ht) && ht->fSaveKeys ) /* need to free keys */
- for ( bck = HashFirstBucket(ht); bck; bck = HashNextBucket(ht) )
- {
- FREE_KEY(ht, bck->key);
- if ( ht->fSaveKeys == 2 ) /* this means key stored in one block */
- break; /* ...so only free once */
- }
- Table(Free)(ht->table, ht->cBuckets);
- ht->cBuckets = Table(Allocate)(&ht->table, MIN_HASH_SIZE);
-
- ht->cItems = 0;
- ht->cDeletedItems = 0;
- ht->cDeltaGoalSize = 0;
- ht->posLastFind = NULL;
- ht->fpData = NULL; /* no longer HashLoading */
- if ( ht->bckData.data ) free( (char *)(ht)->bckData.data);
- ht->bckData.data = (ulong) NULL;
-}
-
-void FreeHashTable(HashTable *ht)
-{
- ClearHashTable(ht);
- if ( ht->iter ) HTfree(ht->iter, sizeof(TableIterator));
- if ( ht->table ) Table(Free)(ht->table, ht->cBuckets);
- free(ht);
-}
-
-/*************************************************************************\
-| HashFind() |
-| HashFindLast() |
-| HashFind(): looks in h(x) + i(i-1)/2 % t as i goes up from 0 |
-| until we either find the key or hit an empty bucket. RETURNS a |
-| pointer to the item in the hit bucket, if we find it, else |
-| RETURNS NULL. |
-| HashFindLast() returns the item returned by the last |
-| HashFind(), which may be NULL if the last HashFind() failed. |
-| LOAD_AND_RETURN reads the data from off disk, if necessary. |
-\*************************************************************************/
-
-HTItem *HashFind(HashTable *ht, ulong key)
-{
- LOAD_AND_RETURN(ht, Find(ht, KEY_TRUNC(ht, key), NULL));
-}
-
-HTItem *HashFindLast(HashTable *ht)
-{
- LOAD_AND_RETURN(ht, ht->posLastFind);
-}
-
-/*************************************************************************\
-| HashFindOrInsert() |
-| HashFindOrInsertItem() |
-| HashInsert() |
-| HashInsertItem() |
-| HashDelete() |
-| HashDeleteLast() |
-| Pretty obvious what these guys do. Some take buckets (items), |
-| some take keys and data separately. All things RETURN the bucket |
-| (a pointer into the hashtable) if appropriate. |
-\*************************************************************************/
-
-HTItem *HashFindOrInsert(HashTable *ht, ulong key, ulong dataInsert)
-{
- /* This is equivalent to Insert without samekey-overwrite */
- return Insert(ht, KEY_TRUNC(ht, key), dataInsert, 0);
-}
-
-HTItem *HashFindOrInsertItem(HashTable *ht, HTItem *pItem)
-{
- return HashFindOrInsert(ht, pItem->key, pItem->data);
-}
-
-HTItem *HashInsert(HashTable *ht, ulong key, ulong data)
-{
- return Insert(ht, KEY_TRUNC(ht, key), data, SAMEKEY_OVERWRITE);
-}
-
-HTItem *HashInsertItem(HashTable *ht, HTItem *pItem)
-{
- return HashInsert(ht, pItem->key, pItem->data);
-}
-
-int HashDelete(HashTable *ht, ulong key)
-{
- return Delete(ht, KEY_TRUNC(ht, key), !FAST_DELETE, 0);
-}
-
-int HashDeleteLast(HashTable *ht)
-{
- if ( !ht->posLastFind ) /* last find failed */
- return 0;
- return Delete(ht, 0, !FAST_DELETE, 1); /* no need to specify a key */
-}
-
-/*************************************************************************\
-| HashFirstBucket() |
-| HashNextBucket() |
-| Iterates through the items in the hashtable by iterating through |
-| the table. Since we know about deleted buckets and loading data |
-| off disk, and the table doesn't, our job is to take care of these |
-| things. RETURNS a bucket, or NULL after the last bucket. |
-\*************************************************************************/
-
-HTItem *HashFirstBucket(HashTable *ht)
-{
- HTItem *retval;
-
- for ( retval = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
- retval; retval = Table(NextBucket)(ht->iter) )
- if ( !IS_BCK_DELETED(retval) )
- LOAD_AND_RETURN(ht, retval);
- return NULL;
-}
-
-HTItem *HashNextBucket(HashTable *ht)
-{
- HTItem *retval;
-
- while ( (retval=Table(NextBucket)(ht->iter)) )
- if ( !IS_BCK_DELETED(retval) )
- LOAD_AND_RETURN(ht, retval);
- return NULL;
-}
-
-/*************************************************************************\
-| HashSetDeltaGoalSize() |
-| If we're going to insert 100 items, set the delta goal size to |
-| 100 and we take that into account when inserting. Likewise, if |
-|      we're going to delete 100 items, set it to -100 and we won't       |
-| rehash until all 100 have been done. It's ok to be wrong, but |
-| it's efficient to be right. Returns the delta value. |
-\*************************************************************************/
-
-int HashSetDeltaGoalSize(HashTable *ht, int delta)
-{
- ht->cDeltaGoalSize = delta;
-#if FAST_DELETE == 1 || defined INSERT_ONLY
- if ( ht->cDeltaGoalSize < 0 ) /* for fast delete, we never */
- ht->cDeltaGoalSize = 0; /* ...rehash after deletion */
-#endif
- return ht->cDeltaGoalSize;
-}
-
-
-/*************************************************************************\
-| HashSave() |
-| HashLoad() |
-| HashLoadKeys() |
-| Routines for saving and loading the hashtable from disk. We can |
-| then use the hashtable in two ways: loading it back into memory |
-| (HashLoad()) or loading only the keys into memory, in which case |
-| the data for a given key is loaded off disk when the key is |
-| retrieved. The data is freed when something new is retrieved in |
-| its place, so this is not a "lazy-load" scheme. |
-| The key is saved automatically and restored upon load, but the |
-| user needs to specify a routine for reading and writing the data. |
-| fSaveKeys is of course set to 1 when you read in a hashtable. |
-| HashLoad RETURNS a newly allocated hashtable. |
-| DATA_WRITE() takes an fp and a char * (representing the data |
-| field), and must perform two separate tasks. If fp is NULL, |
-| return the number of bytes written. If not, writes the data to |
-| disk at the place the fp points to. |
-| DATA_READ() takes an fp and the number of bytes in the data |
-| field, and returns a char * which points to wherever you've |
-| written the data. Thus, you must allocate memory for the data. |
-| Both dataRead and dataWrite may be NULL if you just wish to |
-| store the data field directly, as an integer. |
-\*************************************************************************/
-
-void HashSave(FILE *fp, HashTable *ht, int (*dataWrite)(FILE *, char *))
-{
- long cchData, posStart;
- HTItem *bck;
-
- /* File format: magic number (4 bytes)
- : cchKey (one word)
- : cItems (one word)
- : cDeletedItems (one word)
- : table info (buckets and a bitmap)
- : cchAllKeys (one word)
- Then the keys, in a block. If cchKey is NULL_TERMINATED, the keys
- are null-terminated too, otherwise this takes up cchKey*cItems bytes.
- Note that keys are not written for DELETED buckets.
- Then the data:
- : EITHER DELETED (one word) to indicate it's a deleted bucket,
- : OR number of bytes for this (non-empty) bucket's data
- (one word). This is not stored if dataWrite == NULL
- since the size is known to be sizeof(ul). Plus:
- : the data for this bucket (variable length)
- All words are in network byte order. */
-
- fprintf(fp, "%s", MAGIC_KEY);
- WRITE_UL(fp, ht->cchKey); /* WRITE_UL, READ_UL, etc in fks-hash.h */
- WRITE_UL(fp, ht->cItems);
- WRITE_UL(fp, ht->cDeletedItems);
- Table(Write)(fp, ht->table, ht->cBuckets); /* writes cBuckets too */
-
- WRITE_UL(fp, 0); /* to be replaced with sizeof(key block) */
- posStart = ftell(fp);
- for ( bck = HashFirstBucket(ht); bck; bck = HashNextBucket(ht) )
- fwrite(KEY_PTR(ht, bck->key), 1,
- (ht->cchKey == NULL_TERMINATED ?
- strlen(KEY_PTR(ht, bck->key))+1 : ht->cchKey), fp);
- cchData = ftell(fp) - posStart;
- fseek(fp, posStart - sizeof(unsigned long), SEEK_SET);
- WRITE_UL(fp, cchData);
- fseek(fp, 0, SEEK_END); /* done with our sojourn at the header */
-
- /* Unlike HashFirstBucket, TableFirstBucket iters through deleted bcks */
- for ( bck = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
- bck; bck = Table(NextBucket)(ht->iter) )
- if ( dataWrite == NULL || IS_BCK_DELETED(bck) )
- WRITE_UL(fp, bck->data);
- else /* write cchData followed by the data */
- {
- WRITE_UL(fp, (*dataWrite)(NULL, (char *)bck->data));
- (*dataWrite)(fp, (char *)bck->data);
- }
-}
-
-static HashTable *HashDoLoad(FILE *fp, char * (*dataRead)(FILE *, int),
- HashTable *ht)
-{
- ulong cchKey;
- char szMagicKey[4], *rgchKeys;
- HTItem *bck;
-
- fread(szMagicKey, 1, 4, fp);
- if ( strncmp(szMagicKey, MAGIC_KEY, 4) )
- {
- fprintf(stderr, "ERROR: not a hash table (magic key is %4.4s, not %s)\n",
- szMagicKey, MAGIC_KEY);
- exit(3);
- }
- Table(Free)(ht->table, ht->cBuckets); /* allocated in AllocateHashTable */
-
- READ_UL(fp, ht->cchKey);
- READ_UL(fp, ht->cItems);
- READ_UL(fp, ht->cDeletedItems);
- ht->cBuckets = Table(Read)(fp, &ht->table); /* next is the table info */
-
- READ_UL(fp, cchKey);
- rgchKeys = (char *) HTsmalloc( cchKey ); /* stores all the keys */
- fread(rgchKeys, 1, cchKey, fp);
- /* We use the table iterator so we don't try to LOAD_AND_RETURN */
- for ( bck = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
- bck; bck = Table(NextBucket)(ht->iter) )
- {
- READ_UL(fp, bck->data); /* all we need if dataRead is NULL */
- if ( IS_BCK_DELETED(bck) ) /* always 0 if defined(INSERT_ONLY) */
- continue; /* this is why we read the data first */
- if ( dataRead != NULL ) /* if it's null, we're done */
- if ( !ht->fpData ) /* load data into memory */
- bck->data = (ulong)dataRead(fp, bck->data);
- else /* store location of data on disk */
- {
- fseek(fp, bck->data, SEEK_CUR); /* bck->data held size of data */
- bck->data = ftell(fp) - bck->data - sizeof(unsigned long);
- }
-
- if ( ht->cchKey == NULL_TERMINATED ) /* now read the key */
- {
- bck->key = (ulong) rgchKeys;
- rgchKeys = strchr(rgchKeys, '\0') + 1; /* read past the string */
- }
- else
- {
- if ( STORES_PTR(ht) ) /* small keys stored directly */
- bck->key = (ulong) rgchKeys;
- else
- memcpy(&bck->key, rgchKeys, ht->cchKey);
- rgchKeys += ht->cchKey;
- }
- }
- if ( !STORES_PTR(ht) ) /* keys are stored directly */
- HTfree(rgchKeys - cchKey, cchKey); /* we've advanced rgchK to end */
- return ht;
-}
-
-HashTable *HashLoad(FILE *fp, char * (*dataRead)(FILE *, int))
-{
- HashTable *ht;
- ht = AllocateHashTable(0, 2); /* cchKey set later, fSaveKey should be 2! */
- return HashDoLoad(fp, dataRead, ht);
-}
-
-HashTable *HashLoadKeys(FILE *fp, char * (*dataRead)(FILE *, int))
-{
- HashTable *ht;
-
- if ( dataRead == NULL )
- return HashLoad(fp, NULL); /* no reason not to load the data here */
- ht = AllocateHashTable(0, 2); /* cchKey set later, fSaveKey should be 2! */
- ht->fpData = fp; /* tells HashDoLoad() to only load keys */
- ht->dataRead = dataRead;
- return HashDoLoad(fp, dataRead, ht);
-}
-
-/*************************************************************************\
-| PrintHashTable() |
-| A debugging tool. Prints the entire contents of the hash table, |
-|      like so: key/data, one pair per line.  Returns number of bytes     |
-| allocated. If time is not -1, we print it as the time required |
-| for the hash. If iForm is 0, we just print the stats. If it's |
-| 1, we print the keys and data too, but the keys are printed as |
-| ulongs. If it's 2, we print the keys correctly (as long numbers |
-| or as strings). |
-\*************************************************************************/
-
-ulong PrintHashTable(HashTable *ht, double time, int iForm)
-{
- ulong cbData = 0, cbBin = 0, cItems = 0, cOccupied = 0;
- HTItem *item;
-
- printf("HASH TABLE.\n");
- if ( time > -1.0 )
- {
- printf("----------\n");
- printf("Time: %27.2f\n", time);
- }
-
- for ( item = Table(FirstBucket)(ht->iter, ht->table, ht->cBuckets);
- item; item = Table(NextBucket)(ht->iter) )
- {
- cOccupied++; /* this includes deleted buckets */
- if ( IS_BCK_DELETED(item) ) /* we don't need you for anything else */
- continue;
- cItems++; /* this is for a sanity check */
- if ( STORES_PTR(ht) )
- cbData += ht->cchKey == NULL_TERMINATED ?
- WORD_ROUND(strlen((char *)item->key)+1) : ht->cchKey;
- else
- cbBin -= sizeof(item->key), cbData += sizeof(item->key);
- cbBin -= sizeof(item->data), cbData += sizeof(item->data);
- if ( iForm != 0 ) /* we want the actual contents */
- {
- if ( iForm == 2 && ht->cchKey == NULL_TERMINATED )
- printf("%s/%lu\n", (char *)item->key, item->data);
- else if ( iForm == 2 && STORES_PTR(ht) )
- printf("%.*s/%lu\n",
- (int)ht->cchKey, (char *)item->key, item->data);
- else /* either key actually is a ulong, or iForm == 1 */
- printf("%lu/%lu\n", item->key, item->data);
- }
- }
- assert( cItems == ht->cItems ); /* sanity check */
- cbBin = Table(Memory)(ht->cBuckets, cOccupied);
-
- printf("----------\n");
- printf("%lu buckets (%lu bytes). %lu empty. %lu hold deleted items.\n"
- "%lu items (%lu bytes).\n"
- "%lu bytes total. %lu bytes (%2.1f%%) of this is ht overhead.\n",
- ht->cBuckets, cbBin, ht->cBuckets - cOccupied, cOccupied - ht->cItems,
- ht->cItems, cbData,
- cbData + cbBin, cbBin, cbBin*100.0/(cbBin+cbData));
-
- return cbData + cbBin;
-}
diff --git a/src/sparsehash-1.6/experimental/libchash.h b/src/sparsehash-1.6/experimental/libchash.h
deleted file mode 100644
index 0c0f70a..0000000
--- a/src/sparsehash-1.6/experimental/libchash.h
+++ /dev/null
@@ -1,252 +0,0 @@
-/* Copyright (c) 1998 - 2005, Google Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * ---
- * Author: Craig Silverstein
- *
- * This library is intended to be used for in-memory hash tables,
- * though it provides rudimentary permanent-storage capabilities.
- * It attempts to be fast, portable, and small. The best algorithm
- * to fulfill these goals is an internal probing hashing algorithm,
- * as in Knuth, _Art of Computer Programming_, vol III. Unlike
- * chained (open) hashing, it doesn't require a pointer for every
- * item, yet it is still constant time lookup in practice.
- *
- * Also to save space, we let the contents (both data and key) that
- * you insert be a union: if the key/data is small, we store it
- * directly in the hashtable, otherwise we store a pointer to it.
- * To keep you from having to figure out which, use KEY_PTR and
- * PTR_KEY to convert between the arguments to these functions and
- * a pointer to the real data. For instance:
- * char key[] = "ab", *key2;
- * HTItem *bck; HashTable *ht;
- * HashInsert(ht, PTR_KEY(ht, key), 0);
- * bck = HashFind(ht, PTR_KEY(ht, "ab"));
- * key2 = KEY_PTR(ht, bck->key);
- *
- * There is a rich set of operations supported:
- * AllocateHashTable() -- Allocates a hashtable structure and
- * returns it.
- * cchKey: if it's a positive number, then each key is a
- * fixed-length record of that length. If it's 0,
- * the key is assumed to be a \0-terminated string.
- * fSaveKey: normally, you are responsible for allocating
- * space for the key. If this is 1, we make a
- * copy of the key for you.
- * ClearHashTable() -- Removes everything from a hashtable
- * FreeHashTable() -- Frees memory used by a hashtable
- *
- * HashFind() -- takes a key (use PTR_KEY) and returns the
- * HTItem containing that key, or NULL if the
- * key is not in the hashtable.
- * HashFindLast() -- returns the item found by last HashFind()
- * HashFindOrInsert() -- inserts the key/data pair if the key
- * is not already in the hashtable, or
- * returns the appropriate HTItem if it is.
- * HashFindOrInsertItem() -- takes key/data as an HTItem.
- * HashInsert() -- adds a key/data pair to the hashtable. What
- * it does if the key is already in the table
- * depends on the value of SAMEKEY_OVERWRITE.
- * HashInsertItem() -- takes key/data as an HTItem.
- * HashDelete() -- removes a key/data pair from the hashtable,
- * if it's there. RETURNS 1 if it was there,
- * 0 else.
- * If you use sparse tables and never delete, the full data
- * space is available. Otherwise we steal -2 (maybe -3),
- * so you can't have data fields with those values.
- * HashDeleteLast() -- deletes the item returned by the last Find().
- *
- * HashFirstBucket() -- used to iterate over the buckets in a
- * hashtable. DON'T INSERT OR DELETE WHILE
- * ITERATING! You can't nest iterations.
- * HashNextBucket() -- RETURNS NULL at the end of iterating.
- *
- * HashSetDeltaGoalSize() -- if you're going to insert 1000 items
- * at once, call this fn with arg 1000.
- * It grows the table more intelligently.
- *
- * HashSave() -- saves the hashtable to a file. It saves keys ok,
- * but it doesn't know how to interpret the data field,
- * so if the data field is a pointer to some complex
- * structure, you must send a function that takes a
- * file pointer and a pointer to the structure, and
- * write whatever you want to write. It should return
- * the number of bytes written. If the file is NULL,
- * it should just return the number of bytes it would
- * write, without writing anything.
- * If your data field is just an integer, not a
- * pointer, just send NULL for the function.
- * HashLoad() -- loads a hashtable. It needs a function that takes
- * a file and the size of the structure, and expects
- * you to read in the structure and return a pointer
- * to it. You must do memory allocation, etc. If
- * the data is just a number, send NULL.
- * HashLoadKeys() -- unlike HashLoad(), doesn't load the data off disk
- * until needed. This saves memory, but if you look
- * up the same key a lot, it does a disk access each
- * time.
- * You can't do Insert() or Delete() on hashtables that were loaded
- * from disk.
- */
-
-#include <sys/types.h>       /* includes definition of "ulong", we hope */
-#define ulong u_long
-
-#define MAGIC_KEY "CHsh" /* when we save the file */
-
-#ifndef LOG_WORD_SIZE /* 5 for 32 bit words, 6 for 64 */
-#if defined (__LP64__) || defined (_LP64)
-#define LOG_WORD_SIZE 6 /* log_2(sizeof(ulong)) [in bits] */
-#else
-#define LOG_WORD_SIZE 5 /* log_2(sizeof(ulong)) [in bits] */
-#endif
-#endif
-
- /* The following gives a space/time tradeoff: how many buckets are *
- * in each bin. 0 gives 32 buckets/bin, which is a good number. */
-#ifndef LOG_BM_WORDS
-#define LOG_BM_WORDS 0 /* each group has 2^L_B_W * 32 buckets */
-#endif
-
- /* The following are all parameters that affect performance. */
-#ifndef JUMP
-#define JUMP(key, offset) ( ++(offset) ) /* ( 1 ) for linear hashing */
-#endif
-#ifndef Table
-#define Table(x) Sparse##x /* Dense##x for dense tables */
-#endif
-#ifndef FAST_DELETE
-#define FAST_DELETE 0 /* if it's 1, we never shrink the ht */
-#endif
-#ifndef SAMEKEY_OVERWRITE
-#define SAMEKEY_OVERWRITE 1 /* overwrite item with our key on insert? */
-#endif
-#ifndef OCCUPANCY_PCT
-#define OCCUPANCY_PCT 0.5 /* large PCT means smaller and slower */
-#endif
-#ifndef MIN_HASH_SIZE
-#define MIN_HASH_SIZE 512 /* ht size when first created */
-#endif
- /* When deleting a bucket, we can't just empty it (future hashes *
- * may fail); instead we set the data field to DELETED. Thus you *
- * should set DELETED to a data value you never use. Better yet, *
- * if you don't need to delete, define INSERT_ONLY. */
-#ifndef INSERT_ONLY
-#define DELETED -2UL
-#define IS_BCK_DELETED(bck) ( (bck) && (bck)->data == DELETED )
-#define SET_BCK_DELETED(ht, bck) do { (bck)->data = DELETED; \
- FREE_KEY(ht, (bck)->key); } while ( 0 )
-#else
-#define IS_BCK_DELETED(bck) 0
-#define SET_BCK_DELETED(ht, bck) \
- do { fprintf(stderr, "Deletion not supported for insert-only hashtable\n");\
- exit(2); } while ( 0 )
-#endif
-
- /* We need the following only for dense buckets (Dense##x above). *
- * If you need to, set this to a value you'll never use for data. */
-#define EMPTY -3UL /* steal more of the bck->data space */
-
-
- /* This is what an item is. Either can be cast to a pointer. */
-typedef struct {
- ulong data; /* 4 bytes for data: either a pointer or an integer */
- ulong key; /* 4 bytes for the key: either a pointer or an int */
-} HTItem;
-
-struct Table(Bin); /* defined in chash.c, I hope */
-struct Table(Iterator);
-typedef struct Table(Bin) Table; /* Expands to SparseBin, etc */
-typedef struct Table(Iterator) TableIterator;
-
- /* for STORES_PTR to work ok, cchKey MUST BE DEFINED 1st, cItems 2nd! */
-typedef struct HashTable {
- ulong cchKey; /* the length of the key, or if it's \0 terminated */
- ulong cItems; /* number of items currently in the hashtable */
- ulong cDeletedItems; /* # of buckets holding DELETE in the hashtable */
- ulong cBuckets; /* size of the table */
- Table *table; /* The actual contents of the hashtable */
- int fSaveKeys; /* 1 if we copy keys locally; 2 if keys in one block */
- int cDeltaGoalSize; /* # of coming inserts (or deletes, if <0) we expect */
- HTItem *posLastFind; /* position of last Find() command */
- TableIterator *iter; /* used in First/NextBucket */
-
- FILE *fpData; /* if non-NULL, what item->data points into */
- char * (*dataRead)(FILE *, int); /* how to load data from disk */
- HTItem bckData; /* holds data after being loaded from disk */
-} HashTable;
-
- /* Small keys are stored and passed directly, but large keys are
- * stored and passed as pointers. To make it easier to remember
- * what to pass, we provide two functions:
- * PTR_KEY: give it a pointer to your data, and it returns
- * something appropriate to send to Hash() functions or
- * be stored in a data field.
- * KEY_PTR: give it something returned by a Hash() routine, and
- * it returns a (char *) pointer to the actual data.
- */
-#define HashKeySize(ht) ( ((ulong *)(ht))[0] ) /* this is how we inline */
-#define HashSize(ht) ( ((ulong *)(ht))[1] ) /* ...a la C++ :-) */
-
-#define STORES_PTR(ht) ( HashKeySize(ht) == 0 || \
- HashKeySize(ht) > sizeof(ulong) )
-#define KEY_PTR(ht, key) ( STORES_PTR(ht) ? (char *)(key) : (char *)&(key) )
-#ifdef DONT_HAVE_TO_WORRY_ABOUT_BUS_ERRORS
-#define PTR_KEY(ht, ptr) ( STORES_PTR(ht) ? (ulong)(ptr) : *(ulong *)(ptr) )
-#else
-#define PTR_KEY(ht, ptr) ( STORES_PTR(ht) ? (ulong)(ptr) : HTcopy((char *)ptr))
-#endif
-
-
- /* Function prototypes */
-unsigned long HTcopy(char *pul); /* for PTR_KEY, not for users */
-
-struct HashTable *AllocateHashTable(int cchKey, int fSaveKeys);
-void ClearHashTable(struct HashTable *ht);
-void FreeHashTable(struct HashTable *ht);
-
-HTItem *HashFind(struct HashTable *ht, ulong key);
-HTItem *HashFindLast(struct HashTable *ht);
-HTItem *HashFindOrInsert(struct HashTable *ht, ulong key, ulong dataInsert);
-HTItem *HashFindOrInsertItem(struct HashTable *ht, HTItem *pItem);
-
-HTItem *HashInsert(struct HashTable *ht, ulong key, ulong data);
-HTItem *HashInsertItem(struct HashTable *ht, HTItem *pItem);
-
-int HashDelete(struct HashTable *ht, ulong key);
-int HashDeleteLast(struct HashTable *ht);
-
-HTItem *HashFirstBucket(struct HashTable *ht);
-HTItem *HashNextBucket(struct HashTable *ht);
-
-int HashSetDeltaGoalSize(struct HashTable *ht, int delta);
-
-void HashSave(FILE *fp, struct HashTable *ht, int (*write)(FILE *, char *));
-struct HashTable *HashLoad(FILE *fp, char * (*read)(FILE *, int));
-struct HashTable *HashLoadKeys(FILE *fp, char * (*read)(FILE *, int));
diff --git a/src/sparsehash-1.6/google-sparsehash.sln b/src/sparsehash-1.6/google-sparsehash.sln
deleted file mode 100755
index 6148fb5..0000000
--- a/src/sparsehash-1.6/google-sparsehash.sln
+++ /dev/null
@@ -1,47 +0,0 @@
-Microsoft Visual Studio Solution File, Format Version 8.00
-Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "type_traits_unittest", "vsprojects\type_traits_unittest\type_traits_unittest.vcproj", "{008CCFED-7D7B-46F8-8E13-03837A2258B3}"
- ProjectSection(ProjectDependencies) = postProject
- EndProjectSection
-EndProject
-Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "sparsetable_unittest", "vsprojects\sparsetable_unittest\sparsetable_unittest.vcproj", "{E420867B-8BFA-4739-99EC-E008AB762FF9}"
- ProjectSection(ProjectDependencies) = postProject
- EndProjectSection
-EndProject
-Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "hashtable_unittest", "vsprojects\hashtable_unittest\hashtable_unittest.vcproj", "{FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}"
- ProjectSection(ProjectDependencies) = postProject
- EndProjectSection
-EndProject
-Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "time_hash_map", "vsprojects\time_hash_map\time_hash_map.vcproj", "{A74E5DB8-5295-487A-AB1D-23859F536F45}"
- ProjectSection(ProjectDependencies) = postProject
- EndProjectSection
-EndProject
-Global
- GlobalSection(SolutionConfiguration) = preSolution
- Debug = Debug
- Release = Release
- EndGlobalSection
- GlobalSection(ProjectDependencies) = postSolution
- EndGlobalSection
- GlobalSection(ProjectConfiguration) = postSolution
- {008CCFED-7D7B-46F8-8E13-03837A2258B3}.Debug.ActiveCfg = Debug|Win32
- {008CCFED-7D7B-46F8-8E13-03837A2258B3}.Debug.Build.0 = Debug|Win32
- {008CCFED-7D7B-46F8-8E13-03837A2258B3}.Release.ActiveCfg = Release|Win32
- {008CCFED-7D7B-46F8-8E13-03837A2258B3}.Release.Build.0 = Release|Win32
- {E420867B-8BFA-4739-99EC-E008AB762FF9}.Debug.ActiveCfg = Debug|Win32
- {E420867B-8BFA-4739-99EC-E008AB762FF9}.Debug.Build.0 = Debug|Win32
- {E420867B-8BFA-4739-99EC-E008AB762FF9}.Release.ActiveCfg = Release|Win32
- {E420867B-8BFA-4739-99EC-E008AB762FF9}.Release.Build.0 = Release|Win32
- {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Debug.ActiveCfg = Debug|Win32
- {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Debug.Build.0 = Debug|Win32
- {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Release.ActiveCfg = Release|Win32
- {FCDB3718-F01C-4DE4-B9F5-E10F2C5C0535}.Release.Build.0 = Release|Win32
- {A74E5DB8-5295-487A-AB1D-23859F536F45}.Debug.ActiveCfg = Debug|Win32
- {A74E5DB8-5295-487A-AB1D-23859F536F45}.Debug.Build.0 = Debug|Win32
- {A74E5DB8-5295-487A-AB1D-23859F536F45}.Release.ActiveCfg = Release|Win32
- {A74E5DB8-5295-487A-AB1D-23859F536F45}.Release.Build.0 = Release|Win32
- EndGlobalSection
- GlobalSection(ExtensibilityGlobals) = postSolution
- EndGlobalSection
- GlobalSection(ExtensibilityAddIns) = postSolution
- EndGlobalSection
-EndGlobal
diff --git a/src/sparsehash-1.6/hashtable_unittest b/src/sparsehash-1.6/hashtable_unittest
deleted file mode 100755
index d4ce1fc1fb12c4e1d8dc7ea66bf95fb5a801d13d..0000000000000000000000000000000000000000
Binary files a/src/sparsehash-1.6/hashtable_unittest and /dev/null differ
zutSkADve$${JbiSP7rA`y}L1wdqXzBCuQl+0rW?mvwJZA$nPHNoW;0eF0N>QSM)_m
zuWP6MWmE^rNs^a#vNG!@qpZC6Cq=H+h_Svvq|Guj{U@a_twH*-B+R~m=R=;$z9?zg
zmmM9-9>R&xEt)`IMW7wU=Au2B8==FmhyQ0;OA%Lr>V)emwTU
z5iQpXIj8a&wcjeAX@w0dbcK8VAA9cu9@TZ-i*6l_V8Dtqz}T6%YBB?ZknOQEwv|+M
z)!r${F0Dx;6P(hP=BZdt=__qZh)ubnGoA}6v&afnO$XrC$xOcGUtv22r;xKo>?
zzWzuaZAr^bNuoGTOOX`fTR=zyZte!XB7^e@xuF7&OqR;I`^|t_OTK0KV`SfFGQKui+EOoIL671o}K~!PoGJfv@2a1K)AL
zSIco1d`-K*|0Bo%tPcjhMz(q3d(s16#bZJ6{dko8G8Oo;JmJeZ@uIH#pOBUFl?!K=}F?elo9n3jCQdaN@~bVyoXW1m53
zpPwGcuwK~ySgu|ylI5ytyGPfVFXD^27k0O;>??fE^O2)3we=o*X(QI-+9das^AMMn
zafb3Ce7MFpJHz=FXE49o*_%Iy_2WF&rh0sDb@t_70iB-0e(yW@oW}Pv_?*Qj>|oD~
zPZS@4axrIMD30$*{9TSu6TYYLN#oOo&l-H%@ma@rkS#nvr_MW$HIh0nzK>+FXKKK{
zVRh{=&e``$yZ9Qc(HtY!bIQI3>_J(63~TWkl*988a}H15hpl-9%X7~f+Sf6{^9+H#
z>ekdBB#)skd>~h}U@yzIA}g*>Zblnh(8pHnWnaO%A90jE@J`4?wo{MaiGId;u1kIg
zay7a8xPv_<_Keph!_IK7(6a<**B=M=YWvRU^<}x1r>C?WenamsUeWJg!|!(NC2MP5
zSj@dA`=}9XvG-i*d{yo%-V4Wrb%OnoF20SJT>hopV?FLc85hMfmCtLLxBHP-eaFLj
zeog!Kaz2E4%yI8CTHr%^YfVSA6*%Eo*cW3nmO*dh_lN@>OODaKzc_{c1#o57DpBva
z_ZLs;`iHT<*n)DaA$Pad?km}{H$&!feAI`8cpvtU|IqlS#RBx$+E(JLzIJ%UHsC8m
zd;wP-z*Q%3wF$V2V@zxGSPiaLtFUHV>{+x~{=h66HD9BN|t6{r-@~)tAMG
zrJaC%7+meOaK-*uzZzT}+52kJZ#-UHk>=M~xS|f50j^#W9nm)6ieq7)#MNKwajyYf
zGIES=T#W!%;Fl7vR(o)DSl3?<`&1{&tpTn!S-9FBh^vY53g>6Rp5yBk_8c#tSNe<1
zPqgWfpVn%d&gPGBKl@&3)A@0%i{wX^C+`ht-rJ&iZ_mfndT96?amrFZqD(lgdH#CH
z1loKkGl(-IGx)vuqZh8H9EcS+N*jUULA96d5{sd~z;CtD@W@wwqd{&gso&c<6>LMqMFxI{N}tUn_tEp>8C>8{{=kf_XAj0jT~Y<^3j)2j^EEz
zyt_@ezs7+-x#X3-Nt-`-29xM7$7{U_w3O*Vd{$4!Nn(wH9FqK@bo4l4pt#nw^+sw3
z(cTc-%tvb>f1JJe3=Um~@5`Ni_zVukP^QUwBA0UZ=hDunbIlIo`)Uv7TAZhHO|@gW
z2xJ>zn~&EH{uhe>%^3MQit>y?sNFUszj*^34u%1;94m;*8~+U@LD%
zom$vEDC^t7<1OIh)y~to(7w)*Hs{$~E81NHT(zNo24!gnhs?nDefd_%SLy(X_<(4~
z`)1hUImZ^*;@LO-5~gAw$2qVruk6i`OR^o}!OxzKex)ydE6TB7>I$mg_|*EvnYr@Q
z=#XbKX{;G+C#c^P-v9NF#uvBMLdS8PQ;kdh{|ik^5zq^HoIX$X+UKR0KZYzd^s)};
zESvS(XVZ%Sy&MPaQRhIL`4d<_PSxziXK?6$;`?`M_Te))bP{Dw*F2FsQ?oyJw&v5h
zbI|)vIR|s+pRW@
z{3+*2=y-^qbDqkdLmfjeQP9hI%&{KyB0w);(2D@QL_jZL;3TB!1^NKK@5>9&3-v_O
zOBC6B$9ZD@NOe|!1EoYQ{eM?&rqzD8LOc@)uokS_iDIIquqrqT!2
z1=&YCVE3FdGnn8TT14rhWn
zoC)S|CYZySU=C-3Ih+aRa3&~yCJxk=W2#b+(O#L)^&x{j!!a%U&78J^KaI`bjO`1$
z49_c^|5R(tFwX9FWB%)Fha|1>jE-%5_dgV_j|0y8wQR8Mgi<#2l-Cj&``aIPj$_|P
z*~>ba{zb5JXC}~xIj`fms-=&o^f^9l6bB$%_F&B!0Kbl*pF{Y~^T+Lg#V}}iKc4T#
zo=mpgr0{tXGt<~9gh%Rbs$@;GzK9fhpKxhm#&5c7Qs
z;~0Z&|7qAgpUdsWe2<{qAqQtLC_8|C>tV=Cj+gV@16lcE%~1X*+k%`NhMYVEIk_Kl
zatt!#r2wbAgE1&!LIPy7Z$`5Y|&)PW|-VDC##dC`pYpQsC*s_ZZz%KH%jLSM#{K{L{
z?-*dszwi8ytdf(KU4+j$zFRe4y_+%!{37Q98AN_1{gN(B*=sew@(gYbWSv
z8~WLf-}G(38GCLM_+m4jZ}9Nz0LqbHyEVV|Yd+gl;=k(nb#tk_;@7RZ9nQs%PTIk*
z{ebTll}y58c~4d98*7(d{Z$DBT~1^kLRVs3-5J@2FaIlm3x`@yg9&%pc;YX^Sa
z41VnazxIM(H(i}>L8p#7j5@tPUAHE>?y*CdyN`CFWNc`H3_A2w4;@7P1mp&kp#xIax
zC&Qb;*Hriwejk!wA37JzufKCn@s*cfsWXs%ZGK&<^=bOByJMQ4!a2!7eV(CweVQxe
z58)ZOoj=6(|1-AveoMAcKk~*i82`cZSbOLv*rDSscm~ecEaGH0{QY>=fpZqbQ7Cdann*hUSk$CUJf&p8-tye4m!njQugWLso0Q(J}aq9>5vP2>eF(#d?14ndjp<{eA$n5SMUO_=sEiItX*d
zW!d=h=4V-@^K}d(Uq?H}*NyQh7_9@m;5mf)gcadL*eL$U$UGbC^jy9S`+H=?A{kF%
za8KST=g3g!tQ{!d3x5)>=LRMwPeG68IsJGD{{(o_m8YOdJ>MoK&yleeW#o9#2HUEX
zBjf)p;SRd9{_geXV2+Fq{qC@PU+aLsCB1;hbKvnDcpSde2jNRyj@vtA;`X5DdFyk0
zgo$^rDf18MIH)$nL1hpF)ruIX7Q{d`Ll;OR1}X*Jps5D=xoV96sEUE&`PfJbTBxLd#%tOhdOeqhL^y>oe(I&gD
zB#YusZmU}svA&W;<+_wbO`zkZMl#6Vyob%`@33oHjuM}U9Yql2;l!brdY18C9GHrTu9zmwbzW%ZJ
zJo2eH$zJfXlA#KIwhSe$C>dI&70OVXR(?a%%7=B%BjS6H2Oo#g=OMtHxEjNE;*GdO
z{w>Ih!BPzNe*7NBZ}OayyMxdZVCTRy;+u3rnxQN%pbsUF<1(J^h_1)j!=sR=$B;MT
zPxRTSf%m6$o+AUBPwG5Kbz)fNnlZ4+dGa2Shukt<{XhDf4e=<+o#)ZFZ9QtcF#$jQPF9o@wU6-tDb>={50r(4DnS@!8ht4bl4}M
z%PRQnqMbG%2i~vg`2ITS!%#~-SK{>7aXv|1>ZGNQQH~KelmQzoeYh8}fE-qQhBBl#
z`dzSWv(|-Iqa7tPs0$mp!90>`{aISAYMOrp%
z&^G+WJR7sng|h=n7ha8gSk#3_Ambosp$Ajfdom9@S7}a^Gi#v7P!4ncH;E6*+yX`)
zz8}`CmGI`-?LFGZqZ?<^%zb<6|4=zdx9QxskL!A4z_a1UR?s|U
z6UVSftd(Q%>zBIoU+DUN@3~%>HE0|BHn`{ExQJu3-vOMG#8FDVE8~c{?raE<0C8+F{L{bo`kD}Dd1>iHDS3%&qe9Z$Ss$}nK{NmTJe*!>rp=I(UFau-R^ll
znVe;wvB~7@rYXqT_m}WoS$AHi%bj=0*|<}VKYRvj-_y{U$FSZpPUIl$5>LV|q0c}p
z{n^_F>4LQ4r$2Aed98bqJG&=@JOZI-^Vtyc2!xPFAcQ;uA>mTyeKQu}G4}0oE7t{SK*dLyx{s(k@$_4gc@YF9%QvVkL1J-ieY1W~y4()`
zfEVQ$<5FHhmy}~P?~clM9rE2#{q8V)95+~ddaj4WCux!~t%so_aU2DdV>^5o!}`dv
z2GKNqUs&Jo{d&;+O8wre&uzgwlV^_fdJMAW7{<*0kLmH6=f|zN9Jg?>$<`5?I^TaDE(D6X0U#
z->>h#2OW_1?-AFvHly#9?z{KS!N#=JcS`py*KGDZNxvB+@jyhG4yAB=fXJ_
zFlNGlFyU{O=U=z(k&$!foiPctW!kjyLA$J}%jf^fvcMUi{imm%E$b5+uI!V~zvfeq
z`0WHv4UXHitY5A1dkQ$YM?_0FK6>ZF635I}&i*^6h~s<2_0Tik(t6^~e{1XnzaGEy
z`NmGji4EY_4%j9+X2NTO=Gmsub1OPDP4HY@@-1Ygfk)Do!Kh35Lm9yLJagsmty&(j
zADq3G`q-@d=mfp7k80XyyTs`@=$iFx+@1!k$O}Dp-i6$|Y98$tFSJ;;FvAOkyG`4@
zckZo7+waGj6zR4eK4{}O4@PWwa#6tED*RJ>lU>>ei{mDJ+5BSGlTOeDX@vAjeA;v3
zT=|^u`TUHQ4J;>R1LVRwO*88?&9IJnw?V(_(C^rekr#%RIxQGYMP8f@kQZ#%hG|0c
z0sq?bH~XKrLtcF;_Uyo_=&twX$@FcJEr#`S?3t_
zL#zQknrC`7&(!OAk;W5{HEge6&x`$aLVx97B`33TJ@2Sh{sX(9<7}uM9HQPM-)kEI
z%e&;amaoLkSk2SWC6o*u0d5XR+ziP2Bfw1$Vh_lx2eclX1@E#w`p*2ae%1{iZ^XdC
z{^^bx+C!Oc_DE?Vvhexh+F|lXm^QaVhiD0LACOh!P7eg=De8HD?UO4c{GiJbhPyNj
zcWW3%d}w1&APnQOuRXQOenJ@TmN3jz1w+956BFxn$Tq2K9+9>JH80#voh$%Oy8tih
zA4&%L+jkE@{{6b9v2N7ixgh<7qIzzm`GlsA0X;YNHvqaKekQ{|$H1|03`?}_-n&kp
zz!)-m3@pd>xfQr7wXbz){^s)~SksB`N@dD*vkvTqvVYhNbFdlaU^C3YW|)J`FbA7q
z4mQIaY=$}53{_q`eea=HA4_O@Humu2S})_?fc+Vt?E%-=vl2)5=rWw&m+_r#o4NY&
zj5(|Qp03x8_!#7Y0PXKhlHTe6X?W9)0lEQuU;4s#!{40#8LT5?JOD$lU;iey3A!o6
zDkI0l-(FuzwZW-zeztg
zd&piw&^2R_yGP)Mn$Y}AIY?N)0y~JbyXf<;BQ8Xjk4B!~HV*A+6HSg!CGG=aw2z4eUtFhtE4Q&ft6z
zXL7jPi}}xl6UF&4+hE>3j=cxxnY=fQG7^1A*-pC*--XZ)?<~QYeF+Dx+AhR*Y#(Q1
zrFW)Hwk6xt?HT)xX^-bq91F+MhW4V`hV>?~GSL6-9AvJ6wfDTC0Wu*Bx{rcX?sphE_oC9s~?ecvGU>0{y
z;Jk=!n7uj6(B{6*YGa)RH`d3w8roO&S$Dk!bJoSVaH;9?6@Of7A
zuiIu5m+&CGdH;|DyxaD91N%7imOHc^(tQt+qABTmbd&sz1_x7nWzdDJu=JD*fa~ytY&;#W@i|zd$
z;?G$|pzLmW-ooSbP3;g$m9EyBN#&(cp?38E*uNjanq;Wb^>dF%)P=~C78o+
z0f#k+Av=ZK{)lf`Z0OdEo%G5+S(g2VfM@neIl=L#bid3y!+zI8{+0Um@D_dH)Vt1J
zwnMzLO~R3FHh~Vg=5iivt6kHxgpa2!_%Qmm)s8mU)={*{wb!(TyJL{^Z15B8pYoe+
zolmFGKsvZ0?%3h-*jG#
z0>;KVA<xUa{L;mL3#0$6gTUiX9R7#-@ItbJ48KhD1U*w=({Q?btr+fVrfx
z1n|h`hqc~8nYS`*-b+5Gx^{H)RCxZe)9sJ|Iu|9KLD|e6}(Rzby3mrhBG^&o2?a|Ypyfc*&C$l^PFT94v;H@@R6U!Lp3_9*-9L7d$Q
z%y(Su2R%J0&u(|&9q|L%mG8w|ch&4ip0g*=)-c*&T>rQjM|x-73q@#Z5(_2xF{r1Dpp8b*+$V+j<
z8yH)eK9&3mq-XX8nO!Om9h`06tM4;p4pY*!;WzR#*MpJ`0%KD)h~+bDqk?g>Pe4}egPa&f
zUCN4rloiaeo`nv3HOY)1dNOo$jd(A7wK|-2u-UP!q$l|A)gz}H>F$l<2W4!=cF-T+
z&)0b|)H>p?mz~tSL7rn9S#9HrL1xfK$Ti;NBkz8P%16G`nU~yhgYsQ8^(ecT*O2`h
ze@9cVRp&8J1k^i=KB+TMk1}%QVXbzatE^xL0xSUT59YfsBNQYm+hPDnc%WJn+V
zsyOeIKI4y5*Vgx?zjM@e%OxzW^Sb+XmR=sN+*7NI>tPjmPK%aWP-E#Xip2II{Fl
zj)Cnc+zI&Hs&(_<&ny2-8{Ydh40d0uVEruoL^;-1wH?jqrzU>vHJ?2d%TsS?gPc4A
z{xdq@S?JZ=SDE_>S?)ae&BVJ(ofI&NVD2W~HKTp~!+0LT^LohaC_ac4!yP>XLm932
zQb%HYl;^g7@(T7UNlUKVdN1$)LT7C)91--1R?-b_4
zJdyZ58pS)}_KP?tX%@qT9e(e?T}t;QU&C6L+&zSKR)}TLD~rYJll9OYR9?p?QOoH&
zi?*=NA-)xPt?}<>(I#o`H0ttquhxN2foAwSi{IpzX8oIY4e=h6bhq#fhDE#B|#CaR;08bzX7jp|rKG5{SF?nH5KY#YY{jp6iu#-x9>D08cNw>8L
z^wR)6oOL>|2Vt8E4=R7*di^Ylbtf$0qs}7DJ~;!KLHbIh6_0j;?l>=dpUgH`hVL41
z@6T@Z$((+2F7TaV-+O=)(qsZMg3oDtSM$I*leJGK9Qgd*H2>hA&HpM_;J8!D6}U#_
z)MI&>`yRMt`EstnamVBe47I%~`!lq{d#2bv=?pQOE;_p&bVlEE(j!YGvWg!0o%HCX
zuiq?o%6xk}kZ&)6`MK~8d~XL%0Qkr@#0kA$#|v)YdUBuxWe0QhfTzsOmwsN1LoU@o
z21ih5s}0ANf|MbkogCYjyrSvpcZywdE+NsMBu(&qx<$*T7EK51TkpbNJtS>V*97E;
zviJTxXMI~uCRC3+uWYjg^d<3$`T655UV3iZ6u|c-`-Yd6jr@2`41xcY{9xNge#{pg
z5yaQvY=Zdd0xx$;ejWgRh-V`Q3@%$iV_sZ-RCLL*luJe)ZH7GBjBz3UWyLzucZpWx
z&SsROFF42XA*_LgNG`2n$|OnY*hI;gL&+wVla=r02#^k#k6(u
z)=uOB9e0lDJfP`cRs8xnt!Gr?lU~gy3GfJMlz1h~4Bq|-ZSa}-ZRGGSz?AsQ;XUzb
z#xf>zl8!rH!dN&CyUpL$<1u9!E5>mj^(p&Hx-9AL&!G#_521(W%X+=0yyQ6lU#%w^
zJ&U+wpVZy@HO!-sMT#ciTL79hdsVjKw^xO~gWT(}PS9RepiP%OsEWO+$x#x=S=SNB
zD&9Ax&Z(5VH*?|~>uuexsYA*KmbK~6lz$K~;CGp$MC#cS{rNJ8$y=l0Ve>&!%a|k3
z-Rv+>Os7kK;3lpL!a39mkO0@u?ha{x~mwG6$Q#JdX%F`VPSUtgy$)
zaT8Y@qn~cYzWmlPKDWnY%R_hj{`OuoH`5O1NZR0Uj)#BU=OY;fXN=$Om$HK8dFPv{
zEAMzfIX*v$=ki=0-}$$?ImR9C4_N;w>T_Sf@ANsa+u}IR07qurrVf2VOdXC%?nlr!
z%O$LJhR@HiP3$AszVSilch+P3|L)T#-?1%YGk*A>9KGypxo{eex`LhabtkGpy
z-;HO5H{%;&+75!>F7OQ7;J8ei^snbL)-`?r{7%0M1IMiXPTM2j@y-LDf!TXmuBUv*
zwrt$P1_~J=Bb<&ll!rk#(ktKx@(Xh63@&&^E~us*z90CqrI4Ry#e@y3=BPl&o+GDj?XT9
zkjo4{qP$}VHa0X!-viDi4*$91pqn`K3BtcVz;EqX1zUkTK19(HWs2+hA>@62I8u+h
z)1*xBvqy-ZE}sEyvGkC3?14Boj9aX>p0~laU}$MG&dde=aZW79=YD;}tv(K0YY^+2
zwzh)~_}gxazgyjR01KSYt`Nwz$31dJ%erGPt?N4vTtxAl{s8Bp9}47_<{BZ}!~X4)
z*i*7EoT13Q1ouB||90FD$2fC$ynlz}P3ry7Nn}15#sbpMWHuUU$GC6Pw9VdBJDA37DzE!=r~7#q%aq#f7URlky5
z$af`Nd6i(#Q{^+_q&)x8ID@YjpMPE3^dZCLoNkVQ^BiS6vfJmpbvqOCHt~5obkXsb
z?oXbVHnInL)m+t>`0hB~8JG+4f!-3_^|&X(U5%SCT>yCTywbpH7vRP50Izb))YFM$
z!n)LM(}0{G%kVkNw7KY@;P5=tz?^3ooX3=4p4BjitZ=~`IFPWyUl+_ZF1}QPxf=h?
zk$c6jX_(`@PtJ+9R@&3NIfxak9mu_v$+Pl|<}MBEENq9bH`0HIWo|5f>-xk%1~95^
zZK4hGrLDb)2}d3{b^f1#tO+BJWCYLFKt_-cL&&2Y4&cK#d-xDB6w+s(doOR^2j+Vq
z{+a)QV?D4}u2IrcS|Fca!LxBZ
zdlB;PDD3ZF!si&iAIFDw`7`*?*Yq5oYZ>dT15cCB>cP)kbBapBLBpd5@8O?sI9{)n$3CPqMt`
z*H?5qlm*^%T$bmYIj41;BV-YDaj6UNJVecHUyiwq5|FiW-tfJ^ys_@k&U1hACfuvb
za%W&O!E?ejCqlK!pMkC>LW_KJY{cKSH~9&@ANJFOTHd)g`7!;TxsaAMtOY
zf=p_)*HrxG7+#;G%@3E}tGh{QZ#6k{yt4BP0sX&({XS))U;m%i_K>rH0c}L*p!5Fg
zy-w@fA70no{;od&EEqdk+uHH2_a!&K>-S`in$1$@j5;s8Jt88&*{Vf#xX*YMjJL=u
zB=06|!yU)C(-_a7(D~+UqaBd*=RV4HU#C7-;d9>CNg7Z%EJ%2r#yWh|B?HT}jVY%n{Ie6!e*p`wQX9)lsL<
z8H~aEdBDCE{1N+r
zxpA%{v@wNzC~^@Dt
zA1scCeZioVjEap?cVOH1q7TF!;A|QDSMaH-&$AfT9~3_?I-;ygoKlX-abx^Rz=Grc
zUh!UE9E<}8do&KbcEL^H>rV!}+l_a9`du?$=(ijhMqu>ZxhrK)B
z1>FlFVXX1@@nZRGb`AD2
^1!{zyr8Z-Ug4Xi|HWeRztC64{ti~Pjikr{dWc=fMzj0@*{pz?Sk4+u^P5JD7`Fy}<
zAM4jght)@ua*E`k=^G&K%Yk}SljcW%qC56z6G?O+vXmF{r)uG|KMZp_g;Fc|CzGQ
zqurXPJ;t_xvG2FWXqRuX)*JJjII+i;M!)eLYHV-rQh%?1Q2qU%$mbSRuMPF?c~rgM
zhy0pBUeDu!Bh(d&bz<
z2eO|0Fl8R1U5T=bI57S5uiMt0l{}X~JLOn_j?yzXAC}wZb9v67_p`6i#)vjZ@8|S)
zzGr{t-$a|@)7XpR?l19FK8ZY#apWg#fX<0HggouCv@wRnr+dR9hYO^i=&cuhy>anq
zZ&K9vHHn3NQL(=_C2sDEh+Vx=@yXthcnoFt^~XiBFDaIx%+kJ;SkhN7_Tt@w(TJEo
z8WrKuu$VVkFY5639r%0eXk3I4Lr{yqx8QGT6k-$c2e;$zO{mk1I`14!iEBpFVqRYw
z?Y4+Iw0p;B3)-j7oWhxK8uj7Ii!zPUx4K0vM_rtsp{z@1#(vAhWO#L8K982mOT0Cp
z%}V8|yKoH1xh4B;7GjX)v7Wkh^=##S6Og@1S0B**E)DG0*mPrJ>1eZv52lgV4|qhr
z#ix+_@?Z{lL=I*}r!6{XsoR!ZJ>J7&fByo}*Iy63M}YSv@E#K@fZLma|6TC0S9p(#
zM&LQw7Xf~fz)KVGzCi5lC*H#%JQxG+iSGrXZZIn5jV=(k0{0=zv39f`c#i<@Nsad;
z@E!%;W59b8@SXzRQ)o90+%y62ygPv~Zvx&^sGmkX+|3~2pVoM{@Tl~IN$?)?;2r1)
zguleE#yjbQ{C%wEy(-Y41UL*1NjvAy$SrD^zeH`
zd}y+Mdse(&be6-9GyVK(2X_6X%Qt5~0)MS8nOjqc9sM=9uQ(C=7UsDz
zi~pow;~T}D@MXO#eGYXCu?5nGS}d-{d>F^v)gOx0))b5PCzG*_@P%~O4U6$w_Q!sh
z=LI$#na^Und^D#&SLUPHu0Kx~?{o6AGTvvizEq7I>V0@-k^+%sp-zd*BxAfm_f=276%cZDHrY-#bP#dJo(TTSyDqOJTp=f;}*7C3wejO>#e)
zfqjJf=}hl~TW4S&Z0sV`om(%NJ*-R@i$pJIVdWhcG8rlixy26mc64Y`v}J8
zvX79CNng~#{Oqd6PD(op?IRb9F2Hkv{TjNU4P_?iVoqKhhTYjs7gM#HT`an&WH&Q(
zF%@|+RXeJH4OQ7sX*ci1naD1jn@~>JYX@b-$E2KyOF6+am8Gzs?}c4`ai5VF0&9q}
zrE(piypZ;CDKFI7iINqI#Lcj+Q&wC9dD9GgI_1T|yje$Rx93{b0{O$Wq!scb4LN47
zC#}|cLYqC;DXuFGSfg4XL%6QAf``-Srxh|JhBb@)+yI*{`QMTYRmqX0hh9=u$`Q&N
z_M5p#>kDknfpWyySt)BWRoGdp(bJCg%#1$7elAQOQs;kj``F6s+HBv)PE~%*odc=!
z+6!IJX5%?fHS($c(V3A??8nHbYQ}YLhW5AY$Bb)w`^Y)JiE8!(v)K+bhj+xNP2x9U
z`B_{w{xa3bp*=fhMn1A1BZsOP*NN_#85jGR%DB$%oSAX4A2TlUFVB#&JGyc_*P(5K
z&rWEcm?V5+Y@auoSG^}I_bnluD^Y&Ye~-^m7JD1xSH=5mKdkVA4tBd{PVPIYZNHuYd3ThMdrH-I
z0&`@$A?&?W-${9=Q3Lue-+5j39pSy#esf~qd`@4%slcFWU35Qu5C;ZCVs`;&l96Ph
zM~JtH#mSo^OPuR)KFI#*2gbkL3nt)v!|}idD{WKd=fS>rH%Gc;o#ki`{-20DxUKQk
z;_cUGvJue$pL_bF5=LqGg%h{zhd46uh*~+p!tghi@IZ`32Yg{o>~g~L{R@Iq>XCAD*a3c6
zCHWzS??L>K^zZ|1QYp<3teju$I8ujvz;RoY`>g9*cR&0Qm
zA1Eg-HGR*?etSKf9rV4SYbWB-9LRh{&-sL-p|a*K;*x%_J(tI;XSw`!hP0a_~mNSiMLg-1ctc_*IZtMdb%_1gN~<+fMP$;6qLi9tRyhIjba@Qpgt
z_dml_JTv>x_1D?C1wD43IbF`CzpH0&f2r*?bNabWH#2iKJ%>}b-o&;1O1Hty;dBnC
zQ}qd*lf84YcWxhT(}2?3rn--wlf84YcTV}8O@cP?%Q
z`y&-gG&492va1R4f{X<~j>8clUY0R4AuE1{F*A&xVXRCX-w`h}um~U8*J9ExJrGUF
zcjed_ot5IJ2wDB&`z!xZX1f*w=a<$K%Du2%!?9+{~PX|>$B}Ugfo5H!eWtpjy&Bm
z7CWK)<9js@Bdg-_-d*kq{Cw;-+;K5b4HB)8Q$4inzNciSU7!zWdd_lb5r?o&K89=HWG)}hH-lV0
z2CgTqJ1BX7W~F&9z}+GAk-Ubvf_er7?~J`isQaVu!5y-jQtJNLGw)V+w0;41w6dR&
zSk@N-%!TMg4&cAaHRApi+^w4`=aZTo#$nxeRQC;AU@u`XAHO*Uw#)k~y=|&7*5R({
z`K<$be24B;7ahC7~c$9iNV
z?xAWEH*+pMShs|8s}R4?cL(Gj$~5Ahs785DREGCNEl~GF;qInH`Vq`{(tDyB3$d`g
zCn^#Dd55|8ip6{7eNmyz~W9+L;om~BGs543`Uu*=`aQZ(bnOD%dga_8kAF}c!X?Vx
zNzw%KRPJ4e!r04<1jsyXh`ycC8W@)av*+?71H;Z^E)K@1~NnRGB4Y}i1CVyDmD0!Z8sv5au)}X_`q2_X2)AhF|<=0?c
zi!h2o4s3-yIIY+A9lBl05$c(?9GR$lUmke8+gg*)K6mlp!FCNi3I)~fWbkn3b)9%m
z`?`-}ABR0|VmET7a=kNqI?xK&-e16;i0eG#bBJZi+5H>Ru~Zv1qM>FUFz9nywZA3pWpg7sn6
zG5Rpe1?j_A3Js-1C+;Rv{2jSS{7v2^-&f-A*cACYbIac3Cas%&ZW8{s%O3LKZ?_KT
zUW+c~UPa!GI754tY?XTz(y!U8Z237wzuvtH$9*At74EalUWIoNU+7*%&TR^N70%J@
zRZgr{a}3(6xM`tm12T5Gx5$0z0QcB^q-WZm_%{MS6YkAuo1-oUyI+f0ql6en&TEs~
zU6td$jrG1)?22;V`|n!+viDdVH}we8@BW`xD%f=&BR_q;D07*Y`VQWIm*-)0
zO+VWv`!o9CfnQer&8B?`$7l8>OXvbHyW7?^A5}ZrY^a
zYxXJMxPW~M+ckQ5>xWgllj-Gt`b8zX%;lDKzlb>+S=V+(&2g%>i_0PFIBp~B;{T?`
zJzk}(^T$!}e%0=4>%XhMzBtK#)vu4~>Y41{PtRQQpTT-2>li(g<%0A~w=B7Ix{SOx
zY?kRVgptu@+QSM)UR{RczLIqr&e8C9>q0fhAYEoAXtw{FSCiLDKAm`0VACwik!Ihd
z`OMZ;W@4P7>#E}|mb=`%CGWnQ#k@80WB$CwabGRG75nka&s(Q4hH7|=<$`$2y&jbJ
z0hfD??@g&OSG%9_lM$Q)^-jO@f65-CPi#o5zNWLrUn#oY@H#ceAYIQNe!=I9vq_IV
zqW1tnbDOiRT^!pQd4{%g+8Z`#UzQ--8q3mlZkP45t@+X2#nzh-KUm$`&T^M~ZI|~+
z&d}Okt-k_sY4qlf|69Sxt2cAp7qYg~24HM}m&0Fyb2NJM+3jkML3;CKYi7)21FWyo
z1{e|pfp)-u*LJ{v^RWYR>}D-H`G;!k_PY5sj~#G)QX5II+`rs-dXI&t^WAE$L2`e#
z;fZ56cna-QW4CP>Q^!-Veq;9)`Kan^I{M9YWik8rvtgWhG+1Y49b>~_xgeeOVy+oM
zeI>6MoTFJY#z)i~gVv0hpxN!u
zR>xZ`cR6`0bGSO*V!6xBTk^i@Sf
zs7;O2tcAwUsvNV(F)ycY$pwwMTDTp*TaCx8iPHZihS){K+rS4Ov5WL`c^G5asNr|l
zMUA7hak$|