What database does Facebook use?

"What database does Facebook use?" is one of the most common questions asked when people start talking about which database is the most scalable for large-scale web applications. In fact, the person asking is usually an open source proponent who knows very well that Facebook uses MySQL as its core database engine. That fact is often the single biggest argument developers use to push for MySQL adoption at their companies, and I would imagine it is also why this is such a popular Google query.

While Facebook uses MySQL, they do not use it as-is out of the box. In fact, their team has submitted numerous high-performance enhancements to the MySQL core and the Innodb plug-in. Their main focus has been on adding performance counters to Innodb. Other changes focused on the IO subsystem, including the following new features (a sample configuration follows the list):

  • innodb_io_capacity – sets the IO capacity of the server to determine rate limits for background IO
  • innodb_read_io_threads, innodb_write_io_threads – set the number of background IO threads
  • innodb_max_merged_io – sets the maximum number of adjacent IO requests that may be merged into a large IO request
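As a rough sketch, these could appear in my.cnf as follows. The values below are invented for illustration only and should be tuned to your own hardware:

[mysqld]
# Server IO capacity used to rate-limit background IO
innodb_io_capacity = 1000
# Number of background IO threads for reads and writes
innodb_read_io_threads = 8
innodb_write_io_threads = 8
# Maximum number of adjacent IO requests merged into one large request
innodb_max_merged_io = 64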

Facebook uses MySQL as a key-value store in which data is randomly distributed across a large set of logical instances. These logical instances are spread out across physical nodes, and load balancing is done at the physical node level. Facebook has developed a partitioning scheme in which a global ID is assigned to all user data. They also have a custom archiving scheme based on how frequently and recently each user's data is accessed. Most data is distributed randomly. Amazingly, it has been rumored that Facebook runs 1,800 MySQL servers with only 3 full-time DBAs.
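To make the idea concrete, here is a minimal sketch of a modulo-style mapping from a global ID to a logical instance. Facebook's actual scheme is not public, so the instance count, table, and names below are purely hypothetical:

-- Map a global user ID to one of 4000 hypothetical logical instances
SELECT 1234567890 % 4000 AS logical_instance;

-- A directory table could then map each logical instance to a physical node
CREATE TABLE shard_directory (
  logical_instance INT UNSIGNED NOT NULL PRIMARY KEY,
  physical_node    VARCHAR(64)  NOT NULL
) ENGINE=InnoDB;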

Facebook primarily uses MySQL for structured data storage such as wall posts, user information, etc. This data is replicated between their various data centers. For blob storage (photos, video, etc.), Facebook makes use of a custom solution that involves a CDN externally and NFS internally.

It is also important to note that Facebook makes heavy use of Memcache, a memory caching system that speeds up dynamic database-driven websites by caching data and objects in RAM to reduce read time. Memcache is Facebook's primary form of caching and greatly reduces the database load. This caching layer is what allows Facebook to recall your data as quickly as it does: if a request does not have to go to the database, the data is simply fetched from the cache based on your user ID.

So, while “What database does Facebook use?” seems like a simple question, you can see that they have added a variety of other systems around MySQL to make it truly scalable at web scale. Still, feel free to use the argument, “MySQL is as good or better than Oracle or MS SQL Server; heck, even Facebook uses it, and they have 500 million users!”


MySQL – Innodb Slow Shutdown – Dirty Buffer Pages

If you have a MySQL server that reserves a large amount of memory for the Innodb buffer pool, you may have seen that it can often take a very long time to shut down. During the shutdown process, the server will not be able to serve any requests. From my experience, it can take 1-5 minutes per GB of dirty buffer to perform a shutdown. So, for example, if you have a server with 50G of memory allocated to the Innodb buffer pool and 50% of the buffer is dirty (unwritten to disk), that is 25G to flush, which can take anywhere from 25-125 minutes. Multi-hour shutdowns on large, busy Innodb MySQL servers are not unheard of.

Luckily, there is a fairly easy way to speed up shutdown while still allowing the server to stay up and handle requests. The secret is to lower the maximum percentage of dirty pages allowed, which is controlled by the global variable innodb_max_dirty_pages_pct. On my servers I usually have this set at 75%. Because the variable can be changed at run-time, we can use it to flush dirty pages ahead of a planned shutdown.

First, enter the command:

set global innodb_max_dirty_pages_pct=0;

You can then enter the following to verify the new value:

show global variables where variable_name rlike 'dirty';

To monitor the number of dirty pages that have not been written to disk, enter the following:

show global status where variable_name rlike 'dirty';

You will see this number begin to drop. It is unlikely to ever reach 0, but once it drops to a low level and reaches a plateau, you can shut down the server. You will find that the server shuts down very quickly, allowing you to minimize downtime.
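Putting it all together, here is the whole sequence as a sketch. The restore value of 75 matches the setting I use on my servers; substitute your own normal value:

-- Force background flushing of dirty pages ahead of a planned shutdown
set global innodb_max_dirty_pages_pct=0;

-- Watch Innodb_buffer_pool_pages_dirty fall until it plateaus
show global status where variable_name rlike 'dirty';

-- If you cancel the shutdown, restore your normal setting
set global innodb_max_dirty_pages_pct=75;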


Top My.CNF Optimizations for Innodb

These are the four parameters that will have the greatest effect on the performance of MySQL 5.1.x and Innodb. If you are suffering from poor performance, try changing the following settings. Remember, each individual situation will vary, and in many cases the actual design of your queries will have more to do with your overall performance than any system tuning tricks. The best tool I have found to capture, review, and analyze query performance for MySQL is Jet Profiler for MySQL. If you need help optimizing your queries, let us know. A combined sample configuration follows the individual settings below.

innodb_buffer_pool_size = 8G
Set this to ~80% of free server memory. For example, on a dedicated MySQL server with 10G of RAM, set it to 8G.

innodb_flush_log_at_trx_commit = 0
Setting this to 0 will give a huge performance improvement; however, your data is at somewhat more risk in the event of a hardware failure, since the Innodb log is flushed only about once per second and the last second of transactions can be lost in a crash.

sync_binlog = 0
Setting this to 0 will also give a huge performance improvement, with a similar trade-off: the operating system decides when to flush the binary log, so the most recent events can be lost in a crash.

innodb_flush_method = O_DIRECT
On many systems, this will provide a performance improvement because it bypasses the OS file cache and avoids double-buffering. However, it can actually have a negative effect on some systems, so make sure you test appropriately.
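Putting the four settings together, a my.cnf fragment for the dedicated 10G server in the example above might look like this. Treat it as a starting point to test, not a drop-in configuration:

[mysqld]
# ~80% of memory on a dedicated 10G MySQL server
innodb_buffer_pool_size = 8G
# Flush the Innodb log about once per second instead of at every commit
innodb_flush_log_at_trx_commit = 0
# Let the OS schedule binary log flushes
sync_binlog = 0
# Bypass the OS file cache to avoid double-buffering; test before deploying
innodb_flush_method = O_DIRECT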

MySQL – Constraints on Hierarchical Data in a Self-Referential Table

I recently had a client who was using a table structure that utilized a self-referential design, with an ID field and a Parent_ID field. After upgrading to Innodb, his DBA was trying to use a foreign key constraint to perform cascading UPDATE and DELETE operations. After much heartache, they called me in to find out why it was not working. Unfortunately, I had to share with them the fact that MySQL does not support this use of cascading constraints on self-referential tables.

This deviation from the SQL standard affects an ON UPDATE CASCADE or ON UPDATE SET NULL that recurses to update the same table it has previously updated during the cascade; instead of cascading, it acts like RESTRICT. This means that you cannot use self-referential ON UPDATE CASCADE or ON UPDATE SET NULL operations in MySQL. According to the Innodb developers, this is to prevent infinite loops resulting from cascaded updates, although I would think this could be addressed in future versions. A self-referential ON DELETE SET NULL, on the other hand, is still possible, as is a self-referential ON DELETE CASCADE.
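Here is a small sketch that demonstrates the behavior. The table and constraint names are hypothetical; the DDL itself is accepted by MySQL, but the ON UPDATE action will act like RESTRICT when it recurses into the same table:

-- A hypothetical self-referential hierarchy
CREATE TABLE category (
  id        INT UNSIGNED NOT NULL PRIMARY KEY,
  parent_id INT UNSIGNED NULL,
  name      VARCHAR(64)  NOT NULL,
  CONSTRAINT fk_category_parent FOREIGN KEY (parent_id)
    REFERENCES category (id)
    ON DELETE CASCADE   -- self-referential delete cascades work
    ON UPDATE CASCADE   -- accepted, but behaves like RESTRICT on recursion
) ENGINE=InnoDB;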


MySQL – ERROR 1005: Can’t create table (errno: 150) – INNODB

If you have seen this error, don’t worry, you are not alone. It is often accompanied by a message that a table or file could not be created, and it usually happens when creating a foreign key. In my experience, 99% of the time it is due to an incompatibility between the two fields in the foreign key. Usually it is something simple, like an unsigned integer referencing a signed integer (see the sketch below). The trickiest case I have seen is a foreign key between two CHAR fields that do not share the same CHARSET and COLLATE. ALTER the table so that the CHARSET and COLLATE are the same, and try to add the foreign key again.
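As a sketch, here is the most common way to hit errno: 150 and the fix. The table names are made up for illustration:

-- parent.id is unsigned
CREATE TABLE parent (
  id INT UNSIGNED NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE child (
  id        INT UNSIGNED NOT NULL PRIMARY KEY,
  parent_id INT NOT NULL   -- signed, so it does not match parent.id
) ENGINE=InnoDB;

-- Fails with ERROR 1005 (errno: 150) because of the signed/unsigned mismatch
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES parent (id);

-- Fix: make the column definitions identical, then add the key again
ALTER TABLE child MODIFY parent_id INT UNSIGNED NOT NULL;
ALTER TABLE child ADD FOREIGN KEY (parent_id) REFERENCES parent (id);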

Let me know if you have other examples of how you have worked around errno: 150 when adding foreign keys.


MySQL – Best way to speed up Slave replication

The number one thing you can do to speed up Slave replication is to set innodb_flush_log_at_trx_commit=0 in your my.cnf file. This makes transactions less recoverable on your Slave in case of a crash, but with a Slave this is usually an acceptable risk. The setting prevents MySQL from forcing an fsync after every transaction, allowing transactions to be batched up and fsynced in one operation, roughly once per second. On slower HD RAIDs, this is a huge performance benefit.

Setting sync_binlog=0 will also prove beneficial, though it too carries some additional risk, since the operating system, rather than MySQL, decides when the binary log is flushed to disk. A sample my.cnf fragment with both settings follows.
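As a sketch, the relevant my.cnf fragment on the Slave would look like this; everything else in your configuration stays as-is:

[mysqld]
# Flush the Innodb log roughly once per second instead of at every commit;
# up to ~1 second of transactions can be lost if the Slave crashes
innodb_flush_log_at_trx_commit = 0
# Leave binary log flushing to the operating system
sync_binlog = 0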