A little bit of Google goes a long way:
A common myth (and one which serves to illustrate the mechanics of proper RAID implementation) is that in all deployments, RAID 10 is inherently better for relational databases than RAID 5, due to RAID 5's need to recalculate and redistribute parity data on a per-write basis. 
While this may have been a hurdle in past RAID 5 implementations, the task of parity recalculation and redistribution within modern Storage Area Network (SAN) appliances is performed as a back-end process transparent to the host, not as an in-line process which competes with existing I/O. (i.e. the RAID controller handles this as a housekeeping task to be performed during a particular spindle's idle timeslices, so as not to disrupt any pending I/O from the host.) The "write penalty" inherent to RAID 5 has been effectively masked since the late 1990s by a combination of improved controller design, larger amounts of cache, and faster hard disks.
In the vast majority of enterprise-level SAN hardware, any writes generated by the host are simply acknowledged immediately, and destaged to disk on the back end when the controller sees fit, from an efficiency standpoint, to do so. From the host's perspective, an individual write to a RAID 10 volume is no faster than an individual write to a RAID 5 volume; a difference between the two only becomes apparent when the write cache at the SAN controller level is overwhelmed, and the SAN appliance must reject or gate further write requests in order to allow write buffers on the controller to destage to disk. While rare, this generally indicates poor performance management on the part of the SAN administrator, not a shortcoming of RAID 5 or RAID 10. SAN appliances generally service multiple hosts which compete with one another for both controller cache and spindle time. This contention is largely masked, in that the controller is generally intelligent and adaptive enough to maximize read cache hit ratios while also destaging data from write cache efficiently.
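The acknowledge-now, destage-later behavior described above can be sketched as a toy model. This is an illustration only, not any vendor's actual firmware; the class name, capacity figure, and "gated" signal are all invented for the example:

```python
from collections import deque

class WriteBackCache:
    """Toy model of a controller's write cache: writes are acknowledged
    immediately and destaged to disk later, unless the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = deque()  # blocks acknowledged but not yet on disk

    def write(self, block):
        if len(self.pending) >= self.capacity:
            return "gated"      # cache overwhelmed: host must wait
        self.pending.append(block)
        return "acked"          # host sees an instant completion

    def destage(self, n=1):
        """Background housekeeping: flush up to n blocks to disk."""
        flushed = 0
        while self.pending and flushed < n:
            self.pending.popleft()
            flushed += 1
        return flushed

cache = WriteBackCache(capacity=2)
print(cache.write("A"))   # acked
print(cache.write("B"))   # acked
print(cache.write("C"))   # gated -- buffers must destage first
cache.destage()
print(cache.write("C"))   # acked
```

The key point the model captures: as long as destaging keeps pace, the host never observes the underlying RAID geometry at all.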
The choice of RAID 10 versus RAID 5 for the purposes of housing a relational database will depend upon a number of factors (spindle availability, cost, business risk, etc.) but, from a performance standpoint, it depends mostly on the type of I/O the database can expect to see. For databases that are expected to be exclusively or strongly read-biased, RAID 10 is often chosen, in that it offers a slight speed improvement over RAID 5 on sustained reads and sustained randomized writes. If a database is expected to be strongly write-biased, RAID 5 becomes the more attractive option, since RAID 5 does not suffer from the same write handicap inherent in RAID 10; all spindles in a RAID 5 can be utilized to write simultaneously, whereas only half the members of a RAID 10 can be used. However, for reasons similar to those which have masked the "write penalty" in RAID 5, the reduced ability of a RAID 10 to handle sustained writes has been largely masked by improvements in controller cache efficiency and disk throughput.
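The "half the members" claim above is easy to put numbers on. The sketch below is a back-of-envelope estimate of sustained sequential write throughput, assuming full-stripe writes (so RAID 5 parity is computed without extra reads) and ignoring cache and controller overhead; the disk count and per-disk figure are arbitrary examples:

```python
def sustained_write_throughput(n_disks, per_disk_mbps, level):
    """Back-of-envelope sustained sequential write bandwidth (MB/s).
    Assumes full-stripe writes and no cache or controller overhead."""
    if level == "raid5":
        # one disk's worth of each stripe holds parity, not data
        return (n_disks - 1) * per_disk_mbps
    if level == "raid10":
        # every block is written twice, so only half the spindles
        # contribute unique data bandwidth
        return (n_disks // 2) * per_disk_mbps
    raise ValueError(level)

print(sustained_write_throughput(8, 150, "raid5"))   # 1050
print(sustained_write_throughput(8, 150, "raid10"))  # 600
```

Under these assumptions an 8-spindle RAID 5 streams roughly seven disks' worth of data, versus four for RAID 10; for small random writes the picture is different, which is why the workload profile matters.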
What causes RAID 5 to be slightly slower than RAID 10 on sustained reads is the fact that RAID 5 has parity data interleaved within normal data. On every read pass in RAID 5, there is a probability that a read head may need to traverse a region of parity data. The cumulative effect of this is a slight performance drop compared to RAID 10, which does not use parity, and therefore will never encounter a circumstance where the data underneath a head is of no use. For the vast majority of situations, however, most relational databases housed on RAID 10 perform equally well on RAID 5. The strengths and weaknesses of each type only become an issue in atypical deployments, or deployments on overcommitted or outdated hardware.
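The parity-traversal cost described above can be quantified with a simplified model: in an n-disk RAID 5, 1/n of every stripe is parity, so on a long sequential read each head spends 1/n of its pass over data that is useless to the read. This is an idealization (real controllers can overlap and reorder I/O), but it matches the paragraph's reasoning:

```python
def sequential_read_efficiency(n_disks, level):
    """Fraction of raw aggregate bandwidth usable on a long sequential
    read, under a simplified every-head-streams model."""
    if level == "raid5":
        # 1/n of each stripe is parity the read must skip over
        return (n_disks - 1) / n_disks
    if level == "raid10":
        # no parity: everything under a head is useful data
        return 1.0
    raise ValueError(level)

print(sequential_read_efficiency(8, "raid5"))   # 0.875
print(sequential_read_efficiency(8, "raid10"))  # 1.0
```

Note how the gap shrinks as spindle count grows, which is consistent with the claim that the difference is slight on realistically sized arrays.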
There are, however, considerations beyond performance which must be taken into account. RAID 5 and other non-mirror-based arrays offer a lower degree of resiliency than RAID 10, by virtue of RAID 10's mirroring strategy. In a RAID 10, I/O can continue even in spite of multiple drive failures, provided no mirrored pair loses both of its members. By comparison, in a RAID 5 array, any simultaneous failure involving more than one drive will render the array itself unusable, since the lost data can no longer be reconstructed from parity. For many, particularly in mission-critical environments with enough capital to spend, RAID 10 becomes the favorite, as it provides the lowest level of risk.
Additionally, the time required to rebuild data onto a hot spare in a RAID 10 is significantly less than in a RAID 5, in that all the remaining spindles in a RAID 5 rebuild must participate in the process, whereas only the spare being written and its mirror need to participate in RAID 10. This further increases the reliability advantage of RAID 10 over RAID 5, since it shortens the window during which a second disk failure (specifically, a failure of the mirror being used for recovery) could cause data loss.
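A rough sense of why the rebuild windows differ so much: count the data the surviving spindles must read to reconstruct one failed disk. The figures below are illustrative (8 disks of 1 TB each) and the model ignores partial-capacity use and rebuild throttling:

```python
def rebuild_read_gb(n_disks, disk_gb, level):
    """Total data surviving spindles must read to rebuild one failed
    disk onto a hot spare (toy model)."""
    if level == "raid5":
        # every surviving member is read in full so the lost blocks
        # can be recomputed from data plus parity
        return (n_disks - 1) * disk_gb
    if level == "raid10":
        # only the failed disk's mirror partner is read and copied
        return disk_gb
    raise ValueError(level)

print(rebuild_read_gb(8, 1000, "raid5"))   # 7000
print(rebuild_read_gb(8, 1000, "raid10"))  # 1000
```

Seven times the read volume, spread across spindles that are also serving live I/O, is why the degraded window on RAID 5 is so much longer.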
Again, modern SAN design largely masks any performance hit while the RAID array is in a degraded state, by virtue of being able to selectively perform rebuild operations either in-band or out-of-band with respect to existing I/O traffic. Given the rare nature of drive failures in general, and the exceedingly low probability of multiple concurrent drive failures occurring within the same RAID array, the choice of RAID 5 over RAID 10 often comes down to the preference of the storage administrator, particularly when weighed against other factors such as cost, throughput requirements, and physical spindle availability.
In short, the choice of RAID 5 versus RAID 10 involves a complicated mixture of factors. There is no one-size-fits-all solution, as the choice of one over the other must be dictated by everything from the I/O characteristics of the database, to business risk, to worst case degraded-state throughput, to the number and type of disks present in the array itself. Over the course of the life of a database, you may even see situations where RAID 5 is initially favored, but RAID 10 slowly becomes the better choice, and vice versa.