
[Benchmark] .NET4 vs Hand-made collections

post #1 of 31
Thread Starter 

What is this test all about?

This benchmark measures the performance and scalability of collections in the .NET platform. Collections are objects that hold a number of other objects in an organized manner. I test the Write and Seek(Find) times on several types of collections.

Personal note:

I'm so excited! Watch how my code annihilates .NET's built-in collections with 10x better I/O times!

Which are the participants?

I test, in sequence: Array, List, LinkedList, Dictionary (holds 2x values), SortedDictionary (holds 2x values), SingleDataHub (hand-made, holds 2x values) and MultipleDataHub (hand-made, holds 4x values).
The last two collections are written by me from scratch. They support all the features of the integrated collections, like indexes, enumerators, LINQ, serialization and sorting.

A more detailed description:

Array – An ordinary collection of limited size, declared on creation. Every field has a unique index (0,1,2,3…n) and an object value, which may hold anything from simple bytes to complex custom objects.
List – An automated version of the Array with no declared size (it grows automatically as the limit is approached). List supports several features the Array lacks – automated insertion of elements from an existing array, removal by index and by value, conversion operators, etc.
LinkedList – Another implementation of a list, with similar features but a different construction: where List is built on the array principle, with objects arranged consecutively in memory, LinkedList scatters them and creates links between them, so every element is linked to the one before it and the one after it. LinkedList has no indexes, because objects are spread among free spaces in memory rather than compacted into an identifiable array sequence.
Dictionary – A collection similar to the List, except that each entry contains two paired objects – a Key and a Value. In addition, entries have indexes.
SortedDictionary – The same collection as Dictionary, except that entries are sorted by Key, using the default comparer per data type. The sorting is from smallest to largest.
DataHub – My implementation of SortedDictionary. Same features, but a different sorting algorithm. It supports and makes use of dual-core CPUs.
MultipleDataHub – My implementation again, this time of a collection that holds an arbitrary number of objects per entry. Where SortedDictionary has a Key-Value pair, MultipleDataHub has Key-n*Value pairs.
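
To make the lineup concrete, here is how each participant is constructed in C#. The two DataHub constructors follow the signatures used in the test code at the end of this post; their internals are not public, so treat them as given:

Code:
// Construction of each participant (assumes using System; using System.Collections.Generic;).
string[] array = new string[1000];                      // fixed size, set at creation
List<string> list = new List<string>();                 // grows automatically
LinkedList<string> linked = new LinkedList<string>();   // doubly-linked nodes, no indexer
Dictionary<string, string> dict = new Dictionary<string, string>();               // key/value pairs
SortedDictionary<string, string> sortedDict = new SortedDictionary<string, string>(); // sorted by Key

// Hand-made collections (constructor signatures taken from the test code below):
SingleDataHub<string, string> hub = new SingleDataHub<string, string>();
MultipleDataHub<string> multiHub = new MultipleDataHub<string>(
    new Type[] { typeof(string), typeof(string), typeof(string) }); // one key + 3 values per entry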

Background

Exactly a year ago I was making a dictionary application for a school project. After reading quite a few opinions in developer forums, I noticed how everyone complained about the performance of the SortedDictionary collection – sorting was slow, resulting in extended Write times, and Read times were so bad they blocked the execution of the app for several seconds.
So I started making a custom collection to use in my project. As things progressed, the collection itself became my major project. I developed several different collections with different functionality and built everything into a single assembly. It is called DataHub and is available via my website; you can check it out if you wish. In general, I improved on several different .NET predefined classes while keeping the same features.
For the last year I have used the assembly in four subsequent projects and improved it to near perfection.
I have posted about this assembly on several forums before, including this one, receiving little to no attention. I hope the results of this bench will attract more eyes.

Test method

In this benchmark I load the collections with different amounts of string values, measuring the time it takes to construct the collection (allocate memory), to insert all the values (Write time) and to locate a single value (Find time).
The value I seek is always the one at 9/10 of the total amount (if I load 1000 objects, I seek the 900th).
This gives a realistic, near-worst-case scenario.
Every String object is the string representation of the index of the element being inserted. Thus, the first object has a value of "0", the 760th object has a value of "760", etc.
At the end of the bench I have posted the exact code that performs the benchmark.
Times spent writing the log or displaying results do not affect the measurements themselves, of course.
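
The measurement pattern itself is just timestamp, operate, timestamp, subtract. Here is a minimal sketch of one Write + Find round for a List, using the same DateTime.Now approach as the full test code at the bottom of the post:

Code:
// Minimal sketch of one measurement round (assumes using System; using System.Collections.Generic;).
int n = 1000;
string target = (0.9 * n).ToString();     // the seek target sits at 9/10 of the load

var list = new List<string>();
DateTime start = DateTime.Now;
for (int i = 0; i < n; i++)
    list.Add(i.ToString());               // every value is the string form of its index
TimeSpan write = DateTime.Now - start;

start = DateTime.Now;
foreach (string s in list)
    if (s.Equals(target)) break;          // linear scan until the target is found
TimeSpan find = DateTime.Now - start;

Console.WriteLine("Write: " + write.TotalMilliseconds + " ms, Find: " + find.TotalMilliseconds + " ms");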

Some collections, due to their nature, hold n times the load amount.
For example, Dictionary, SortedDictionary and SingleDataHub hold 2x (2000 actual values in the 1000-value test – 1000 key/value pairs).
MultipleDataHub is loaded with 4x values – 1000 keys + 3x1000 values in the 1000-value test.

Due to the sorted nature of some collections, Binary Search can be (and is) implemented, along with ordinary parsing. BinarySearch is supposed to drastically reduce Seek times. SortedDictionary is benchmarked using both methods, just to show the difference. Other sorted collections are benchmarked only with BinarySearch.
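
For SortedDictionary, the binary search in the test code below runs over a snapshot of the (already sorted) keys:

Code:
// Binary search over a SortedDictionary's keys
// (assumes using System; using System.Collections.Generic; using System.Linq;).
var sorted = new SortedDictionary<string, string>();
for (int i = 0; i < 1000; i++)
    sorted.Add(i.ToString(), i.ToString());

// Keys enumerate in sorted order; ToList() copies them so List<T>.BinarySearch can be applied.
int pos = sorted.Keys.ToList().BinarySearch("900");
Console.WriteLine(pos >= 0 ? "found at sorted position " + pos : "not found");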

Seek in MultipleDataHub is omitted, because it is 100% identical to the seek in SingleDataHub and does not depend on the values. The charts for MDH contain the values for SDH.

Since the entry "SortedDictionary (BINARY SEARCH)" reflects only a different search algorithm for the same collection, its Initialize and Write values in the graphs are the same as those of SortedDictionary (because it is actually the same object).

Performance is measured in milliseconds.

Test machine

Tests have been performed on 4 different configurations over the past year, with identical, mutually confirming results. The results in this bench are from my sig rig:

Athlon II 260 @stock
4GB DDR3 1333 CL7
500GB Samsung 7200RPM, 16MB cache
Win7 x64
.NET4.0
VisualStudio 2010 Ultimate

Layout

For every load level, I will post the log of the test itself, along with a graph representing the performance of every collection and then comments.


BENCHMARK:

Load with 1000 strings

Log:
STRING TEST /0
Array of 1000 constructed in 0
List constructed in 0
Linked List constructed in 0
Dictionary constructed in 0
Sorted Dictionary constructed in 0
DataHub constructed in 0
Multiple DataHub constructed in 0

Writing to Array..
Writing 1000 entries to Array took 0
Writing to List..
Writing 1000 entries to List took 0
Writing to Linked List..
Writing 1000 entries to Linked List took 0
Writing to Dictionary..
Writing 1000 key/value pairs to Dictionary took 0
Writing to Sorted Dictionary..
Writing 1000 key/value pairs to Sorted Dictionary took 10
Writing to DataHub..
Writing 1000 key/value pairs to DataHub took 0
Writing to Multiple DataHub..
Writing 1000 x4 key/value pairs to Multiple DataHub took 0

Locating 900 in Array..
Parsing Array took 0
Locating 900 in List..
Parsing List took 10
Locating 900 in Linked List..
Parsing Linked List took 0
Locating 900 in Dictionary..
Parsing Dictionary took 0
Locating 900 in Sorted Dictionary..
Parsing Sorted Dictionary took 0
Locating 900 in Sorted Dictionary (Binary Search)..
Parsing Sorted Dictionary (Binary Search) took 0
Locating 900 in DataHub (Binary Search)..
Parsing DataHub (Binary Search) took 0
Done. Press Enter..



As you can see, things go pretty fast with just 1000 strings to handle – all collections initialize instantly (0 milliseconds).
All collections write instantly (0 ms) except SortedDictionary, which takes 10 ms.
All collections seek instantly (0 ms) except List, which takes 10 ms.

At this level all collections perform identically – 10 ms readings are usually timer-resolution artifacts rather than an actual slow-down (see the sketch below). Performance differences are not exposed until more serious loads.
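
The 10 ms quantum comes from the clock behind DateTime.Now, which only advances in coarse steps (typically ~10-16 ms on Windows). A quick sketch – not part of the benchmark – that makes that granularity visible:

Code:
// Spin until DateTime.Now actually changes, and print the size of each step
// (assumes using System;).
DateTime last = DateTime.Now;
for (int i = 0; i < 5; i++)
{
    DateTime next = DateTime.Now;
    while (next == last)              // busy-wait for the next clock tick
        next = DateTime.Now;
    Console.WriteLine((next - last).TotalMilliseconds); // typically ~10-16 ms per step
    last = next;
}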


Load with 10k strings

Log:
STRING TEST /0
Array of 10000 constructed in 0
List constructed in 0
Linked List constructed in 0
Dictionary constructed in 0
Sorted Dictionary constructed in 0
DataHub constructed in 10
Multiple DataHub constructed in 0

Writing to Array..
Writing 10000 entries to Array took 0
Writing to List..
Writing 10000 entries to List took 0
Writing to Linked List..
Writing 10000 entries to Linked List took 0
Writing to Dictionary..
Writing 10000 key/value pairs to Dictionary took 10
Writing to Sorted Dictionary..
Writing 10000 key/value pairs to Sorted Dictionary took 30.0001
Writing to DataHub..
Writing 10000 key/value pairs to DataHub took 20
Writing to Multiple DataHub..
Writing 10000 x4 key/value pairs to Multiple DataHub took 20

Locating 9000 in Array..
Parsing Array took 0
Locating 9000 in List..
Parsing List took 0
Locating 9000 in Linked List..
Parsing Linked List took 0
Locating 9000 in Dictionary..
Parsing Dictionary took 10
Locating 9000 in Sorted Dictionary..
Parsing Sorted Dictionary took 10
Locating 9000 in Sorted Dictionary (Binary Search)..
Parsing Sorted Dictionary (Binary Search) took 0
Locating 9000 in DataHub (Binary Search)..
Parsing DataHub (Binary Search) took 0
Done. Press Enter..



In the 10k objects test we observe a first difference in performance. Initialization is again effectively instant – a 10 ms reading is a measurement artifact 99% of the time, and in the remaining 1% it does not affect execution in any meaningful way.

The Write times for Array, List and LinkedList are again 0 ms, meaning those collections can handle 10 times more values without losing any performance.
The Write value for Dictionary is 10 ms, which is either an artifact (as most 10 ms readings are) or a normal consequence of the fact that Dictionary contains 2x the test objects (20k in this test). Either way, 10 ms does not affect execution.

The Write value for SortedDictionary jumped to 30 ms from 0-10 ms in the previous test. This is a 3x performance impact for a 10x increase in values.
The SingleDataHub also shows a rise, but with the better result of 20 ms – a 2x impact for a 10x increase.

The MultipleDataHub shows the same result as the SingleDataHub, but for 2x more values (by default it contains 4x the test amount, where SDH and SortedDict contain 2x the test amount). This gives a 2x impact for a 10x increase in values, despite the 2x larger amount.

Load with 100k strings

Log:
STRING TEST /0
Array of 100000 constructed in 0
List constructed in 0
Linked List constructed in 0
Dictionary constructed in 0
Sorted Dictionary constructed in 0
DataHub constructed in 0
Multiple DataHub constructed in 0

Writing to Array..
Writing 100000 entries to Array took 20
Writing to List..
Writing 100000 entries to List took 10
Writing to Linked List..
Writing 100000 entries to Linked List took 20.0001
Writing to Dictionary..
Writing 100000 key/value pairs to Dictionary took 50
Writing to Sorted Dictionary..
Writing 100000 key/value pairs to Sorted Dictionary took 430.0006
Writing to DataHub..
Writing 100000 key/value pairs to DataHub took 40.0001
Writing to Multiple DataHub..
Writing 100000 x4 key/value pairs to Multiple DataHub took 250.0003

Locating 90000 in Array..
Parsing Array took 60.0001
Locating 90000 in List..
Parsing List took 80.0001
Locating 90000 in Linked List..
Parsing Linked List took 10
Locating 90000 in Dictionary..
Parsing Dictionary took 60.0001
Locating 90000 in Sorted Dictionary..
Parsing Sorted Dictionary took 80.0002
Locating 90000 in Sorted Dictionary (Binary Search)..
Parsing Sorted Dictionary (Binary Search) took 10
Locating 90000 in DataHub (Binary Search)..
Parsing DataHub (Binary Search) took 0
Done. Press Enter..



Now here we get some data to compare.
Initialization is still below the detection threshold for all collections (confirming that the 10 ms reading in the previous test was indeed an artifact).

The Write times, however, are heavily affected by the 10x values increase!

Array, List and LinkedList no longer write in an instant – they take ~15 ms on average to write 100k entries (which is still very scalable: about a 2x performance impact for a 100x increase in values).

Dictionary is more affected by the raised amount (as a reminder, Dictionary, SortedDict and SDH write 2x the test amount).
Dictionary now writes in 50 ms, which is a 5x performance impact for the 10x increased load.
SortedDict, however, shows terrible performance for the same load:
its Write time is an amazing 430 ms, which is roughly a 14x performance impact for the 10x increase in load!
The opponent, SDH, handles the load much better – 40 ms, which is just a 2x impact for another 10x increase in load.
MultipleDataHub is affected by the quad load and shows 250 ms, which is a 12.5x impact for the 10x increase in values. However, its write time is still ~1.7x faster than SortedDict's, for 2x more values!

The Seek times are also very interesting:
Even though Array, List and LinkedList performed identically in the Write test, the first two lag behind LinkedList in the Seek test – 60 ms and 80 ms against just 10 ms for LinkedList!
This is a 6x performance impact for Array and an 8x impact for List after a 10x increase in load, against no impact for LinkedList.

Among the double-sized collections, there are other interesting results:
Dictionary and SortedDictionary took the same amount of time as Array and List for an ordinary parse – 60 and 80 ms (again, 6x and 8x impact for a 10x increase).
The improved BinarySearch, however, gave the SortedDictionary an advantage – just 10 ms (no impact). The Dictionary collection does not support BinarySearch, so 60 ms is the final value there.
SDH and MDH still outperform everything else, with a 0 ms seek time for BinarySearch.

Load with 1M strings

Log:
STRING TEST /0
Array of 1000000 constructed in 0
List constructed in 0
Linked List constructed in 0
Dictionary constructed in 0
Sorted Dictionary constructed in 0
DataHub constructed in 0
Multiple DataHub constructed in 0

Writing to Array..
Writing 1000000 entries to Array took 190.0003
Writing to List..
Writing 1000000 entries to List took 210.0003
Writing to Linked List..
Writing 1000000 entries to Linked List took 370.0005
Writing to Dictionary..
Writing 1000000 key/value pairs to Dictionary took 680.0009
Writing to Sorted Dictionary..
Writing 1000000 key/value pairs to Sorted Dictionary took 5130.0071
Writing to DataHub..
Writing 1000000 key/value pairs to DataHub took 730.001
Writing to Multiple DataHub..
Writing 1000000 x4 key/value pairs to Multiple DataHub took 6550.009

Locating 900000 in Array..
Parsing Array took 620.0009
Locating 900000 in List..
Parsing List took 5470.0075
Locating 900000 in Linked List..
Parsing Linked List took 70.0001
Locating 900000 in Dictionary..
Parsing Dictionary took 650.0009
Locating 900000 in Sorted Dictionary..
Parsing Sorted Dictionary took 800.0011
Locating 900000 in Sorted Dictionary (Binary Search)..
Parsing Sorted Dictionary (Binary Search) took 110.0002
Locating 900000 in DataHub (Binary Search)..
Parsing DataHub (Binary Search) took 0
Done. Press Enter..



Finally, the 1M values test. The results are astonishing:
The Initialization times are still 0 ms for all collections, meaning the allocation of 1M empty memory slots for the Array is optimized and never affects performance.

The Write values are as follows:
For Array we have 190 ms, which is a 9.5x performance impact for the 10x increase in load (190 ms vs 20 ms in the previous test).
List gives almost the same value, 210 ms, but that is an epic 21x performance impact for a 10x increase!
LinkedList now posts a 370 ms write time and lags a bit behind the two above. This is an 18.5x impact for a 10x raise.

In the double-load collections, things get ugly.
Dictionary shows a 680 ms write time (more than half a second). This is a 13.6x performance impact for the 10x increase in load. It is nothing, however, compared to SortedDict:
SortedDict took an amazing 5130 ms (5.1 s) to write the test data. This is a 12x performance impact for the 10x increase. Those 5.1 seconds severely impact the execution of the application – your app will be (Not Responding) for 5.1 seconds.
The direct opponent, SDH, takes only 730 ms for the same amount of data. This is an 18.25x impact for the last 10x increase in values, but a solid 7 times faster than SortedDict.
MDH, the quad-values collection, is affected by the large amount of data and gives a 6550 ms (6.5 s) write time, which is a 26.2x impact for the last 10x.

The Seek times, however, show a rather different situation:
Array finds the test value in 620 ms, which is a solid 10x impact for the 10x increase.
List, which performed similarly in the Write test, now fails with nearly 5.5 s (5470 ms)! That is an amazing 68x performance impact for a 10x raise in load!
LinkedList is still the king of the hill in the Seek test with a result of 70 ms – a 7x impact for a 10x raise.
Among the 2x collections, Dictionary seeks in 650 ms, which is a 10.8x impact for the 10x increase.
SortedDict does a full parse in 800 ms, which is exactly a 10x impact for the 10x raise.
The BinarySearch option, however, brings this down to a more acceptable 110 ms (an 11x impact).
But even that can't beat SDH and MDH, which deliver an amazing instantaneous seek of 0 ms even with 1M entries! (And yes, the entry is actually found.)

Load with 10M strings

Unfortunately I ran out of RAM during the 1M values test. A 10M test would therefore give a severe advantage to whichever collection is tested first, as it would use RAM while all the following ones would hit the page file. That would give unrealistic results, so I'll just end the benchmark at 1M values. This is a serious load anyway.

Performance Impact

Let's measure the impact that an increase in values has on performance:
(0 ms to 10 ms readings are counted as 1 ms)
(values are actual)

The impact is shown as follows:
Collection – low load (times raise in load) times performance impact (scalability)
Collection – total load (times raise in load) times performance impact (scalability)
Collection – high load (times raise in load) times performance impact (scalability)
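
A worked example of how to read these lines, using Array's Write results: 1k entries took 0-10 ms (counted as 1 ms) and 1M entries took 190 ms, so the 1000x raise in load gives 190 / 1 = 190x impact on performance – well below linear, hence "very scalable". From 10k (also ~1 ms) to 1M the raise is only 100x, but the impact is still ~190x, i.e. a 1.9x loss.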


Write impact:
Array – 1k to 100k (100x raise in load) gives 20x impact on performance (very scalable)
Array – 1k to 1M (1000x raise in load) gives 190x impact on performance (very scalable)
Array – 10k to 1M (100x raise in load) gives 190x impact on performance (unscalable, 1.9x loss)

List – 1k to 100k (100x raise in load) gives 10x impact on performance (very scalable)
List – 1k to 1M (1000x raise in load) gives 210x impact on performance (very scalable)
List – 10k to 1M (100x raise in load) gives 210x impact on performance (unscalable, 2.1x loss)

Linked List – 1k to 100k (100x raise in load) gives 20x impact on performance (very scalable)
Linked List – 1k to 1M (1000x raise in load) gives 370x impact on performance (very scalable)
Linked List – 10k to 1M (100x raise in load) gives 370x impact on performance (unscalable, 3.7x loss)

Dictionary – 2k to 200k (100x raise in load) gives 50x impact on performance (scalable)
Dictionary – 2k to 2M (1000x raise in load) gives 680x impact on performance (scalable)
Dictionary – 20k to 2M (100x raise in load) gives 680x impact on performance (very unscalable, 6.8x loss)

Sorted Dictionary – 2k to 200k (100x raise in load) gives 430x impact on performance (unscalable, 4.3x loss)
Sorted Dictionary – 2k to 2M (1000x raise in load) gives 5130x impact on performance (very unscalable, 5.1x loss)
Sorted Dictionary – 20k to 2M (100x raise in load) gives 171x impact on performance (unscalable, 1.7x loss)

SDH – 2k to 200k (100x raise in load) gives 40x impact on performance (very scalable)
SDH – 2k to 2M (1000x raise in load) gives 730x impact on performance (scalable)
SDH – 20k to 2M (100x raise in load) gives 36.5x impact on performance (very scalable)

MDH – 4k to 400k (100x raise in load) gives 250x impact on performance (unscalable, 2.5x loss)
MDH – 4k to 4M (1000x raise in load) gives 6550x impact on performance (very unscalable, 6.5x loss)
MDH – 40k to 4M (100x raise in load) gives 327.5x impact on performance (unscalable, 3.2x loss)


Read impact:
Array – 1k to 100k (100x raise in load) gives 60x impact on performance (scalable)
Array – 1k to 1M (1000x raise in load) gives 620x impact on performance (scalable)
Array – 10k to 1M (100x raise in load) gives 620x impact on performance (very unscalable, 6.2x loss)

List – 1k to 100k (100x raise in load) gives 80x impact on performance (scalable)
List – 1k to 1M (1000x raise in load) gives 5470x impact on performance (very unscalable, 5.4x loss)
List – 10k to 1M (100x raise in load) gives 5470x impact on performance (utterly unscalable, 54x loss)

Linked List – 1k to 100k (100x raise in load) gives 1x impact on performance (no impact)
Linked List – 1k to 1M (1000x raise in load) gives 70x impact on performance (very scalable)
Linked List – 10k to 1M (100x raise in load) gives 70x impact on performance (scalable)

Dictionary – 2k to 200k (100x raise in load) gives 60x impact on performance (scalable)
Dictionary – 2k to 2M (1000x raise in load) gives 650x impact on performance (scalable)
Dictionary – 20k to 2M (100x raise in load) gives 650x impact on performance (very unscalable, 6.5x loss)

Sorted Dictionary – 2k to 200k (100x raise in load) gives 80x impact on performance (scalable)
Sorted Dictionary – 2k to 2M (1000x raise in load) gives 800x impact on performance (scalable)
Sorted Dictionary – 20k to 2M (100x raise in load) gives 800x impact on performance (very unscalable, 8x loss)

Sorted Dictionary (BINARY SEARCH) – 2k to 200k (100x raise in load) gives 1x impact on performance (no impact)
Sorted Dictionary (BINARY SEARCH) – 2k to 2M (1000x raise in load) gives 110x impact on performance (very scalable)
Sorted Dictionary (BINARY SEARCH) – 20k to 2M (100x raise in load) gives 110x impact on performance (unscalable, 1.1x loss)

SDH – 2k to 200k (100x raise in load) gives 1x impact on performance (no impact)
SDH – 2k to 2M (1000x raise in load) gives 1x impact on performance (no impact)
SDH – 20k to 2M (100x raise in load) gives 1x impact on performance (no impact)

MDH – 4k to 400k (100x raise in load) gives 1x impact on performance (no impact)
MDH – 4k to 4M (1000x raise in load) gives 1x impact on performance (no impact)
MDH – 40k to 4M (100x raise in load) gives 1x impact on performance (no impact)


Overall Performance Charts:
(lower is better)

Write:



Read:



Best scorers:

Some really impressive results were shown. I’ll share a few thoughts.
First off, all three single-value collections show similar results in the Write test, with good scalability @low loads and a moderate lack of scalability @high loads (1.9x to 3.7x performance loss).
In the Read test, however, LinkedList practically destroyed the opposition, with very good scalability versus List's unbelievable lack of scalability – a 54x performance loss!

Among the double-value collections, Dictionary has the advantage of being unsorted, resulting in the best performance in the Write test – it doesn't execute a sorting algorithm like the rest of the collections.
However, sorting is a very useful property, allowing for blazing-fast reading later. Also, Dictionary's advantage was not decisive in the Write test – my SingleDataHub (which is sorted) was only 50 ms slower in the last test and even less in the previous ones.
In the Read test, Dictionary loses to the collections with BinarySearch (BS is only supported by sorted collections, like SortedDict and SDH).
There, the undisputed champion is (I'm proud to announce) my own SDH, with indefinite scalability – no impact on performance at any of the load levels, and a blazing-fast search algorithm that finds the desired object in an instant – 0 ms.
The direct opposition, SortedDictionary, lagged behind in the Read test, being 110x slower even with BinarySearch, and lagged behind in the Write test too, being 7x slower under heavy load.

In the quad section, where we have only one collection – the MDH – the results are also interesting.
The MDH is sorted, just like SortedDict, and holds 2x more data. However, it manages to be only 1.27x slower @high load (for 2x more data – very scalable) and is actually 1.72x faster at moderate loads (again for 2x more data). MDH shares SDH's Read result of 0 ms.

Worst performers

Under these conditions, avoid using these collections:
Low load: up to 1k entries it doesn't matter – all collections perform in an instant (0 ms).
Low-moderate load: 10k entries tend to slow down SortedDictionary more than the rest. The other collections perform similarly.
High-moderate load: with 100k entries the worst performer is once again SortedDict. Array and List also don't shine.
High load: at 1M entries SortedDict and List degrade drastically. Avoid them at all costs.


Test code

So that you can see there is no foul play, I'm posting the code that performed the test:

Code:
public static void StringTest(int n)
        {
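            // Note: strArray, strList, strLinkedList, strDict, strSortedDict, strSortedDataHub,
            // strMultipleDataHub and outputStream are fields declared elsewhere in the class.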
            DateTime start;
            DateTime end;
            TimeSpan ts;
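            // DateTime.Now only advances in coarse steps (typically ~10-16 ms on Windows),
            // so readings of 0 or 10 ms are at or below the timer's resolution.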
            start = DateTime.Now;
            strArray = new String[n];
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Array of " + n + " constructed in " + ts.TotalMilliseconds);
            Console.WriteLine("Array of " + n + " constructed in " + ts.TotalMilliseconds);
            start = DateTime.Now;
            strList = new List<string>();
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("List constructed in " + ts.TotalMilliseconds);
            Console.WriteLine("List constructed in " + ts.TotalMilliseconds);
            start = DateTime.Now;
            strLinkedList = new LinkedList<string>();
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Linked List constructed in " + ts.TotalMilliseconds);
            Console.WriteLine("Linked List constructed in " + ts.TotalMilliseconds);
            start = DateTime.Now;
            strDict = new Dictionary<string, string>();
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Dictionary constructed in " + ts.TotalMilliseconds);
            Console.WriteLine("Dictionary constructed in " + ts.TotalMilliseconds);
            start = DateTime.Now;
            strSortedDict = new SortedDictionary<string, string>();
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Sorted Dictionary constructed in " + ts.TotalMilliseconds);
            Console.WriteLine("Sorted Dictionary constructed in " + ts.TotalMilliseconds);
            start = DateTime.Now;
            strSortedDataHub = new SingleDataHub<string, string>();
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("DataHub constructed in " + ts.TotalMilliseconds);
            Console.WriteLine("DataHub constructed in " + ts.TotalMilliseconds);
            start = DateTime.Now;
            strMultipleDataHub = new MultipleDataHub<string>(new Type[] { typeof(string), typeof(string), typeof(string) });
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Multiple DataHub constructed in " + ts.TotalMilliseconds);
            Console.WriteLine("Multiple DataHub constructed in " + ts.TotalMilliseconds);
            outputStream.WriteLine("");
            Console.WriteLine("");
            int i;
            
            outputStream.WriteLine("Writing to Array..");
            Console.WriteLine("Writing to Array..");
            i = 0;
            start = DateTime.Now;
            while (i < n)
            {
                strArray[i] = i.ToString();
                i++;
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Writing " + n + " entries to Array took " + ts.TotalMilliseconds);
            Console.WriteLine("Writing " + n + " entries to Array took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Writing to List..");
            Console.WriteLine("Writing to List..");
            i = 0;
            start = DateTime.Now;
            while (i < n)
            {
                strList.Add(i.ToString());
                i++;
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Writing " + n + " entries to List took " + ts.TotalMilliseconds);
            Console.WriteLine("Writing " + n + " entries to List took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Writing to Linked List..");
            Console.WriteLine("Writing to Linked List..");
            i = 0;
            start = DateTime.Now;
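            // Note: AddFirst prepends, so the linked list ends up in reverse order (n-1 .. 0);
            // the 0.9*n seek target therefore sits near the head, unlike in the other collections.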
            while (i < n)
            {
                strLinkedList.AddFirst(i.ToString());
                i++;
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Writing " + n + " entries to Linked List took " + ts.TotalMilliseconds);
            Console.WriteLine("Writing " + n + " entries to Linked List took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Writing to Dictionary..");
            Console.WriteLine("Writing to Dictionary..");
            i = 0;
            start = DateTime.Now;
            while (i < n)
            {
                strDict.Add(i.ToString(), i.ToString());
                i++;
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Writing " + n + " key/value pairs to Dictionary took " + ts.TotalMilliseconds);
            Console.WriteLine("Writing " + n + " key/value pairs to Dictionary took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Writing to Sorted Dictionary..");
            Console.WriteLine("Writing to Sorted Dictionary..");
            i = 0;
            start = DateTime.Now;
            while (i < n)
            {
                strSortedDict.Add(i.ToString(),i.ToString());
                i++;
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Writing " + n + " key/value pairs to Sorted Dictionary took " + ts.TotalMilliseconds);
            Console.WriteLine("Writing " + n + " key/value pairs to Sorted Dictionary took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Writing to DataHub..");
            Console.WriteLine("Writing to DataHub..");
            i = 0;
            start = DateTime.Now;
            while (i < n)
            {
                strSortedDataHub.Add(i.ToString(), i.ToString());
                i++;
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Writing " + n + " key/value pairs to DataHub took " + ts.TotalMilliseconds);
            Console.WriteLine("Writing " + n + " key/value pairs to DataHub took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Writing to Multiple DataHub..");
            Console.WriteLine("Writing to Multiple DataHub..");
            i = 0;
            start = DateTime.Now;
            while (i < n)
            {
                strMultipleDataHub.Add(i.ToString(), new object[]{i.ToString(), i.ToString(), i.ToString()});
                i++;
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Writing " + n + " x4 key/value pairs to Multiple DataHub took " + ts.TotalMilliseconds);
            Console.WriteLine("Writing " + n + " x4 key/value pairs to Multiple DataHub took " + ts.TotalMilliseconds);
            outputStream.WriteLine("");
            Console.WriteLine("");

            // Seek phase: linear scans for every collection, plus binary searches for the sorted ones.
            // The target string (0.9 * n).ToString() is recomputed on every loop iteration below,
            // adding the same overhead to each linear scan.
            outputStream.WriteLine("Locating " + 0.9 * n + " in Array..");
            Console.WriteLine("Locating " + 0.9 * n + " in Array..");
            start = DateTime.Now;
            foreach (string s in strArray)
            {
                if (s.Equals((0.9 * n).ToString()))
                {
                    break;
                }
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Parsing Array took " + ts.TotalMilliseconds);
            Console.WriteLine("Parsing Array took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Locating " + 0.9 * n + " in List..");
            Console.WriteLine("Locating " + 0.9 * n + " in List..");
            start = DateTime.Now;
            foreach (string s in strList)
            {
                if (s.Equals((0.9 * n).ToString()))
                {
                    break;
                }
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Parsing List took " + ts.TotalMilliseconds);
            Console.WriteLine("Parsing List took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Locating " + 0.9 * n + " in Linked List..");
            Console.WriteLine("Locating " + 0.9 * n + " in Linked List..");
            start = DateTime.Now;
            foreach (string s in strLinkedList)
            {
                if (s.Equals((0.9 * n).ToString()))
                {
                    break;
                }
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Parsing Linked List took " + ts.TotalMilliseconds);
            Console.WriteLine("Parsing Linked List took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Locating " + 0.9 * n + " in Dictionary..");
            Console.WriteLine("Locating " + 0.9 * n + " in Dictionary..");
            start = DateTime.Now;
            foreach (KeyValuePair<string,string> p in strDict)
            {
                if (p.Key.Equals((0.9 * n).ToString()))
                {
                    break;
                }
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Parsing Dictionary took " + ts.TotalMilliseconds);
            Console.WriteLine("Parsing Dictionary took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Locating " + 0.9 * n + " in Sorted Dictionary..");
            Console.WriteLine("Locating " + 0.9 * n + " in Sorted Dictionary..");
            start = DateTime.Now;
            foreach (KeyValuePair<string, string> p in strSortedDict)
            {
                if (p.Key.Equals((0.9*n).ToString()))
                {
                    break;
                }
            }
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Parsing Sorted Dictionary took " + ts.TotalMilliseconds);
            Console.WriteLine("Parsing Sorted Dictionary took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Locating " + 0.9 * n + " in Sorted Dictionary (Binary Search)..");
            Console.WriteLine("Locating " + 0.9 * n + " in Sorted Dictionary (Binary Search)..");
            start = DateTime.Now;
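            // Keys.ToList() copies every key into a new List<string> before searching,
            // so this timing includes an O(n) copy on top of the O(log n) binary search.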
            strSortedDict.Keys.ToList().BinarySearch((0.9*n).ToString());
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Parsing Sorted Dictionary (Binary Search) took " + ts.TotalMilliseconds);
            Console.WriteLine("Parsing Sorted Dictionary (Binary Search) took " + ts.TotalMilliseconds);
            outputStream.WriteLine("Locating " + 0.9 * n + " in DataHub (Binary Search)..");
            Console.WriteLine("Locating " + 0.9 * n + " in DataHub (Binary Search)..");
            start = DateTime.Now;
            Console.WriteLine(strSortedDataHub.Find((0.9*n).ToString()).Key);
            end = DateTime.Now;
            ts = end - start;
            outputStream.WriteLine("Parsing DataHub (Binary Search) took " + ts.TotalMilliseconds);
            Console.WriteLine("Parsing DataHub (Binary Search) took " + ts.TotalMilliseconds);
        }
Final words:

Geez, this is 14 pages in Word! I didn't expect that much. Anyway, thank you for bearing with me – it must not have been easy.
I hope I provided some good and detailed info. Feel free to share your thoughts.
Cheers!
Edited by ronnin426850 - 3/28/11 at 1:42am
post #2 of 31
It's always good to see things like this. But what's the 30,000-foot view? It's kind of long to examine. I take it you came up with a type of collection that provides some of the benefits of heavier-weight collection objects in .NET but with performance closer to the lightweight collections?
post #3 of 31
Thread Starter 
EDIT:

Quote:
Originally Posted by tand1 View Post
It's always good to see things like this. But what's the 30,000-foot view? It's kind of long to examine. I take it you came up with a type of collection that provides some of the benefits of heavier-weight collection objects in .NET but with performance closer to the lightweight collections?
Yep. And not only that. It's worth reading if you've got the 10 minutes.
Edited by ronnin426850 - 3/27/11 at 8:33am
post #4 of 31
Thread Starter 
So? Any opinions?
post #5 of 31
Please please please use the code tags!

A wall of unreadable code there, ugh.

Pretty cool. Last year we had to do something similar for data structures – a reversible linked list with some specific performance requirements.

Wound up making lists way faster than the Java default.
    
post #6 of 31
Thread Starter 
Quote:
Originally Posted by serge2k View Post
Please please please use the code tags!

A wall of unreadable code there, ugh.

Pretty cool. Last year we had to do something similar for data structures – a reversible linked list with some specific performance requirements.

Wound up making lists way faster than the Java default.
Edited. Thank you, I had completely forgotten about the CODE tag!
post #7 of 31
Thread Starter 
Just implemented another new collection, an alternative to the ordinary List<T>.
Once again, it supports all the features of the integrated List<T>, plus being sorted!
Write and Seek times are as follows:
1M values:
integrated List<T>: W: 220 ms (unsorted), S: 5500 ms
my new List<T>: W: 220 ms (sorted), S: 0 ms (instant)

Everything will soon be uploaded to the website and available for download.

EDIT:
With the single-value collections, I managed to perform a 10M values test. Here is the log, showing the dramatic advantage of my collection over the integrated List and Array.

STRING TEST /0

Writing to Array..
Writing 10000000 entries to Array took 2270.0032
Writing to List..
Writing 10000000 entries to List took 2780.0038
Writing to List PRO..
Writing 10000000 entries to List PRO took 2180.003 (-600ms)
Writing to Linked List..
Writing 10000000 entries to Linked List took 33900.0468

Locating 9000000 in Array..
Parsing Array took 6040.0083
Locating 9000000 in List..
Parsing List took 6180.0086
Locating 9000000 in List PRO..
Parsing List PRO took 0 (-6180ms)
Locating 9000000 in Linked List..
Parsing Linked List took 690.0009
Done. Press Enter..
Edited by ronnin426850 - 3/30/11 at 6:35am
post #8 of 31
Couple of comments (I'm no C# expert, but I program in it professionally):

1) Why are you using DateTime as your counter? Why not a true high-speed counter like the System.Diagnostics.Stopwatch class?

2) Where is the implementation of your new data type? It might help to understand where this performance is coming from.

3) I think the native data types can be made to work fine if you know the performance characteristics associated with them. Just like you wouldn't indiscriminately use a long where you could use an int, creating a more applicable hashtable instead of reaching for a dictionary might be the way to go.

I don't think there is a single object you can point to that is optimal for every situation; you have to know about the problem at hand and choose the right solution. Do you need fast insertion? Fast retrieval? etc. Just look at List<T>.

I am very skeptical of some of these results. I have a hard time believing that there is a routine that can find a value in a 1M-entry list with a 0 ms response time (and no huge drawback somewhere else).

4) If you make graphs in Excel, please use titles, axes and units. It's a huge PITA to filter through a mess of charts that don't have any of that.
    
post #9 of 31
Thread Starter 
Quote:
Originally Posted by killnine View Post
Couple of comments (I'm no C# expert, but I program in it professionally):

1) Why are you using DateTime as your counter? Why not a true high-speed counter like the System.Diagnostics.Stopwatch class?

2) Where is the implementation of your new data type? It might help to understand where this performance is coming from.

3) I think the native data types can be made to work fine if you know the performance characteristics associated with them. Just like you wouldn't indiscriminately use a long where you could use an int, creating a more applicable hashtable instead of reaching for a dictionary might be the way to go.

I don't think there is a single object you can point to that is optimal for every situation; you have to know about the problem at hand and choose the right solution. Do you need fast insertion? Fast retrieval? etc. Just look at List<T>.

I am very skeptical of some of these results. I have a hard time believing that there is a routine that can find a value in a 1M-entry list with a 0 ms response time (and no huge drawback somewhere else).

4) If you make graphs in Excel, please use titles, axes and units. It's a huge PITA to filter through a mess of charts that don't have any of that.
Oh, the result is found for sure. It was hard for me to believe it too, but it is a fact – it gets the value and writes it to the console, along with the index, all correct.

About the "optimal" thing – yes, you are right. However, there are several characteristics which are comparable, and I beat the integrated collections in all of them.

The implementation, of course, I won't share for now.
We don't know the implementation of Microsoft's collections either, so even if I shared mine, you still wouldn't know where the difference comes from.

EDIT: I'll remake the test with Stopwatch.
I did that about a year ago when I initially started development, and the results were identical. Anyway, I'll remake it, OK.
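
For reference, the Stopwatch version of the same measurement is a drop-in change – a sketch (Stopwatch wraps the OS high-resolution performance counter, so it is not limited by the ~10-16 ms system clock):

Code:
// Sketch: timing the List write with System.Diagnostics.Stopwatch instead of DateTime.Now
// (assumes using System; using System.Collections.Generic;).
var list = new List<string>();
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 1000000; i++)
    list.Add(i.ToString());
sw.Stop();
Console.WriteLine("Writing 1000000 entries to List took " + sw.Elapsed.TotalMilliseconds);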
post #10 of 31
Here's what I would suggest:

1) Post your collections as a library for others on OCN to test.
2) Post your collections on Stack Overflow, challenging users to profile them and see if they clobber the native collections in their scenarios.
3) If 1 & 2 pass with flying colors, apply at MS.
    