Channel: Active questions tagged debugging - Database Administrators Stack Exchange

Finding the cause of a stuck "Sending Data" query in MySQL


I'm running MySQL on a 1 GB DigitalOcean droplet, and I have a number of scheduled tasks that run through the Laravel framework. One task in particular (seemingly) randomly causes MySQL to sit at 100% CPU, and I'm not sure where to begin tracking down the cause.

The job downloads a JSON file containing 360 values and stores (or updates) the results in the database. Once that's done, it calls another function that gets all the results from the last 4 hours (about 240 rows) and renders them as a static image. This runs every two minutes from a cron job, and if the job fails, it retries up to 10 times before giving up.

According to show processlist;, the stuck queries are:

mysql> show processlist;
+-----+-------+-----------+---------------------+---------+------+--------------+-------------------------------------------------------------+
| Id  | User  | Host      | db                  | Command | Time | State        | Info                                                        |
+-----+-------+-----------+---------------------+---------+------+--------------+-------------------------------------------------------------+
| 271 | dbusr | localhost | sample_database_com | Execute |    0 | Sending data | select * from `kp_minutes` where (`updated_at` = ?) limit 1 |
| 297 | dbusr | localhost | sample_database_com | Execute |    0 | Sending data | select * from `kp_minutes` where (`updated_at` = ?) limit 1 |
| 303 | dbusr | localhost | sample_database_com | Execute |    0 | Sending data | select * from `kp_minutes` where (`updated_at` = ?) limit 1 |
| 308 | dbusr | localhost | sample_database_com | Execute |    0 | Sending data | select * from `kp_minutes` where (`updated_at` = ?) limit 1 |
| 311 | dbusr | localhost | sample_database_com | Execute |    1 | Sending data | select * from `kp_minutes` where (`updated_at` = ?) limit 1 |
| 317 | dbusr | localhost | sample_database_com | Execute |    0 | Sending data | select * from `kp_minutes` where (`updated_at` = ?) limit 1 |
| 318 | dbusr | localhost | sample_database_com | Sleep   |    1 |              | NULL                                                        |
| 324 | dbusr | localhost | NULL                | Query   |    0 | starting     | show processlist                                            |
| 325 | dbusr | localhost | sample_database_com | Sleep   |    3 |              | NULL                                                        |
+-----+-------+-----------+---------------------+---------+------+--------------+-------------------------------------------------------------+
9 rows in set (0.00 sec)
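(As a hypothetical check of my own, assuming the bound parameter is a DATETIME: an EXPLAIN on the same query should show whether each lookup is a full scan of the table.)

```sql
-- Sketch only: '2017-01-01 00:00:00' stands in for the bound parameter.
EXPLAIN SELECT * FROM `kp_minutes`
WHERE `updated_at` = '2017-01-01 00:00:00'
LIMIT 1;
-- If the plan shows type: ALL and rows near the full table count,
-- every execution is scanning the whole table rather than using an index.
```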

To resolve this I have to clear any queued jobs in Laravel and then restart MySQL, which I end up doing once or twice a day.

I've got other jobs that do similar things, but with far more data. For example, one job inserts (up to) 500,000 items into the database, then pulls those values out and renders them as a static image. Those jobs run just fine, so I'm not sure why a (seemingly) simple select on 240 rows would cause issues.

I've tried pushing my database-heavy jobs to the end of the list in case the hundreds of thousands of rows are causing things to break. I've used Laravel's lockForUpdate() function to lock the kp_minutes rows, and I've added a delay between storing the data and generating the static image. I've also read through a bunch of MySQL questions trying to find out what's going on, but I've got nothing.

How would I go about tracking down the reason for the 100% CPU, and is there anything I can do to make such "Sending data" processes time out? My MySQL install is the default that the DigitalOcean LAMP image provides, so I don't think I've messed anything up by tweaking values.
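(For the timeout part, one thing I've seen mentioned is that MySQL 5.7.8+ can cap SELECT execution time; I haven't verified this applies to my version, so the values below are only a sketch.)

```sql
-- Server-wide cap on read-only SELECTs, in milliseconds (MySQL 5.7.8+).
SET GLOBAL max_execution_time = 10000;

-- Or per statement, via an optimizer hint:
SELECT /*+ MAX_EXECUTION_TIME(10000) */ *
FROM `kp_minutes`
WHERE `updated_at` = '2017-01-01 00:00:00'  -- placeholder for the bound value
LIMIT 1;
```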

EDIT: To answer a few questions:

  • The Laravel application is running on the same server as the MySQL server. I know I shouldn't do that, but it's a (hopefully interim) cost saving measure.
  • Laravel's task scheduling system uses a single cron job to check for tasks to run. It then sends those tasks to a queue (e.g. Amazon SQS, or in my case, as a row in a database) where they're run. This "kp_minutes" task runs once every two minutes.
  • The table in question contains about 175,000 rows, but the where clause selects only about 240 of them.
  • lockForUpdate() uses for update internally: select * from `kp_minutes` where (`updated_at` = ?) limit 1 for update
  • Running show full processlist gives the same results as I've posted.
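Given the ~175,000 rows mentioned above, one thing I could try (assuming updated_at is not already indexed — I haven't confirmed that) is adding an index so each lookup becomes a seek rather than a scan:

```sql
-- Assumption: no existing index on updated_at. The index name is arbitrary.
ALTER TABLE `kp_minutes` ADD INDEX `idx_updated_at` (`updated_at`);
```

An unindexed equality lookup repeated by six concurrent connections every two minutes would plausibly explain both the "Sending data" state and the sustained CPU, since that state covers the time spent reading and filtering rows.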
