While exiting a MySQL function is straightforward (simply use the RETURN keyword), exiting a stored procedure is not quite as obvious. There is no EXIT keyword; however, you can use the LEAVE keyword and specify a label that is attached to the BEGIN of the stored procedure. In other words, you define the stored procedure with a label, which the LEAVE statement can then act upon.
As an example, look at the following stored procedure:

CREATE PROCEDURE TestProc(Value INT)
this_proc: BEGIN
    IF Value IS NULL OR Value = 0 THEN
        SELECT 'Invalid Value';
        LEAVE this_proc;
    END IF;
    SELECT * FROM `Table` t WHERE t.Value = Value;
END
MySQL 5.0+ gave DBAs the ability to write stored procedures and functions. This is a great addition, and one that all "real" DB engines should provide. But, as many DBAs have found out, writing stored procedures using most MySQL clients can be a bit of a challenge. Usually after writing their first attempt, they are greeted with: "Error Code: 1064. You have an error in your SQL syntax".
So, what is going on? Well, by default the MySQL statement delimiter is the semicolon (;). However, the mysql command-line utility also uses the semicolon as its delimiter. If the command-line utility were to interpret the ; characters inside the stored procedure body itself, those statements would never become part of the stored procedure, and the SQL that does reach the server would be syntactically invalid.
The solution is to temporarily change the command-line utility delimiter using the DELIMITER command, as seen here:
DELIMITER //
CREATE PROCEDURE sku_pricing( OUT low_price DECIMAL(8,2), OUT high_price DECIMAL(8,2), OUT avg_price DECIMAL(8,2) )
BEGIN
    SELECT Min(prod_price) INTO low_price FROM items;
    SELECT Max(prod_price) INTO high_price FROM items;
    SELECT Avg(prod_price) INTO avg_price FROM items;
END //
DELIMITER ;
Here, DELIMITER // tells the command-line utility to use // as the new end-of-statement delimiter, and you will notice that the END that closes the stored procedure is written as END // instead of the expected END;. This way the ; characters within the stored procedure body remain intact and are correctly passed to the database engine, rather than being interpreted by the client. Then, to restore things back to how they were initially, the statement closes with DELIMITER ;. Any character may be used as the delimiter except for the backslash (\), as it is an escape character in MySQL.
Everyone knows that using the MySQL command SHOW PROCESSLIST will display all current connections, like the following:
mysql> show processlist;
+----+-------------+-----------------+------+---------+------+----------------------------------+------+
| Id | User        | Host            | db   | Command | Time | State                            | Info |
+----+-------------+-----------------+------+---------+------+----------------------------------+------+
|  1 | root        | localhost:32893 | NULL | Sleep   |    0 |                                  | NULL |
|  5 | system user |                 | NULL | Connect |   98 | Waiting for master to send event | NULL |
|  6 | system user |                 | NULL | Connect | 5018 | Reading event from the relay log | NULL |
+----+-------------+-----------------+------+---------+------+----------------------------------+------+
3 rows in set (0.00 sec)
What is little known, however, is that the SHOW FULL PROCESSLIST command will display the full query in the Info column; by default, SHOW PROCESSLIST displays only the first 100 characters of each statement. One thing to keep in mind about the Info column is that while the value shown is usually the statement (query) the client sent to the server, if that statement executes other statements, it will display the innermost statement instead.
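On MySQL 5.1 and later, the same information is also exposed through the INFORMATION_SCHEMA.PROCESSLIST table, which you can filter and sort with ordinary SQL. A minimal sketch (the WHERE clause is just an illustration):

```sql
-- Full, untruncated Info column:
SHOW FULL PROCESSLIST;

-- MySQL 5.1+: the process list as a queryable table, so you can
-- filter out idle connections and sort by running time.
SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND <> 'Sleep'
ORDER BY TIME DESC;
```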
I have run into many situations where there would be a benefit to not replicating a large INSERT or UPDATE from the master to the slave, in order to not block replication for a long period of time. Luckily, MySQL provides a means to disable binary logging for your current session: simply execute SET sql_log_bin = 0 before the SQL statement that you do not want written to the binlog. This is a session variable, meaning logging will be re-enabled when you close the session, or you can set it back to 1.
In order to keep the slave in sync, you will need to execute the same query on the slave. You can also use mk-table-sync from the Maatkit toolkit to re-sync your table data at a later time.
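As a sketch of the workflow (the table and statement below are hypothetical, and note that setting sql_log_bin requires the SUPER privilege):

```sql
-- On the master: keep the next statement out of the binary log,
-- so it is not replicated to the slave.
SET sql_log_bin = 0;

-- Hypothetical large statement that would otherwise block replication.
UPDATE big_table SET archived = 1 WHERE created_at < '2010-01-01';

-- Re-enable binary logging for the rest of the session.
SET sql_log_bin = 1;

-- On the slave: run the same UPDATE manually to keep the data in sync.
```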
Using ALTER IGNORE TABLE [TABLENAME] ADD UNIQUE INDEX `UNIQUE_INDEX` ([FIELDNAME]) to remove duplicate rows in a table is a fast and efficient process; however, on large tables where the physical size is larger than server memory, the ALTER statement can take a long time to run in a production environment.
If you need to remove duplicates on a very large table (we recently used this on a table of 77 million rows), try this method:

DELETE t1 FROM your_table t1, your_table t2
WHERE t1.duplicate_field = t2.duplicate_field
AND t1.unique_field > t2.unique_field;

Add more conditions if needed (e.g. AND t1.duplicate_field2 = t2.duplicate_field2), and break the delete up into ranges so it runs faster.
If you use an auto-incrementing ID field as the primary key, use it as your unique field, and also use it in the WHERE clause to restrict each delete to a range of records, breaking the job into smaller operations.
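Putting the pieces together, here is a sketch using a hypothetical orders table with an auto-increment primary key id and a duplicated column order_ref; each statement only touches a bounded range of ids:

```sql
-- Keep the row with the lowest id in each duplicate group;
-- the range predicate limits how much work each statement does.
DELETE t1
FROM orders t1, orders t2
WHERE t1.order_ref = t2.order_ref
  AND t1.id > t2.id
  AND t1.id BETWEEN 1 AND 1000000;

-- Repeat with the next range (1000001 to 2000000, and so on)
-- until the whole table has been covered.
```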