i’ve been hitting my head against a wall trying to figure out some odd db behavior, perhaps someone could give me a hand:
i have two identical insert statements with 5000 rows each. executed individually they are pretty quick (ms). executed under a dorun pmap, one of them will time out at the default of 50 s, and the other will take 4 minutes or more. what’s going on here?
(timeout = lock wait timeout)
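roughly this shape, with make-rows and insert-rows! just standing in for my actual code:

;; two identical 5000-row batches, inserted in parallel
(let [rows (make-rows)]              ;; make-rows builds the 5000 [x y] pairs
  (dorun (pmap #(insert-rows! db %)  ;; insert-rows! wraps the jdbc call (pasted below)
               [rows rows])))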
here’s a description of the table:
+-------+------------+------+-----+---------+----------------+
| Field | Type       | Null | Key | Default | Extra          |
+-------+------------+------+-----+---------+----------------+
| id    | bigint(20) | NO   | PRI | NULL    | auto_increment |
| x     | binary(32) | NO   | UNI | NULL    |                |
| y     | mediumblob | YES  |     | NULL    |                |
+-------+------------+------+-----+---------+----------------+
i’m inserting x-y pairs. alternatively, what are some metrics i could have a look at to figure out the problem?
sorry, critical point: both queries are the same, namely they have the same rows (and the statement is insert ignore)
Well, pmap is almost never the right solution for concurrency... but... are you trying to use a single connection across both inserts? Are you trying to do them inside a transaction?
no, they are separate jdbc execute statements
it’s not so much that i need pmap as i’m trying to replicate a db-layer bug that i have elsewhere, and this setup is exhibiting the same behaviour
And what type of db spec / datasource are you passing in the execute?
Also, which JDBC library are you using? clojure.java.jdbc or next.jdbc? And which database is this?
{:host "localhost"
 :dbtype "mysql"
 :user "user"
 :password "test"
 :rewriteBatchedStatements true
 :allowPublicKeyRetrieval true
 :useSSL false
 :useUnicode true
 :characterEncoding "UTF-8"}
clojure.java.jdbc, and mysql (innodb_version 5.7.29)
(jdbc/execute! conn (into ["INSERT IGNORE INTO table (x,y) VALUES (?,?)"] rows) {:transaction? false :multi? true})
OK. So each execute call will stand up a new connection to the DB. You'll definitely get contention in MySQL from attempting two large locking inserts at the same time.
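If you wanted them to share a single connection, a rough sketch would be something like this (db-spec being your map above, rows-a / rows-b your two batches); note this runs them serially on that one connection:

;; assumes (require '[clojure.java.jdbc :as jdbc])
;; open one connection up front and run both batches over it
(jdbc/with-db-connection [conn db-spec]
  (doseq [batch [rows-a rows-b]]
    (jdbc/execute! conn
                   (into ["INSERT IGNORE INTO table (x,y) VALUES (?,?)"] batch)
                   {:multi? true})))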
Oh, so you're avoiding a transaction around each SQL op and you are trying a single batch operation with 5,000 sets of parameters in the batch...
yeah
no good?
rewriteBatchedStatements isn't going to help you here FYI
auto_increment. could it be the key state? would using a UUID make it easier?
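i.e. something like this, generating the key app-side instead of letting auto_increment hand them out (just a sketch; with-uuid is made up, and it assumes the id column were changed to hold a UUID string):

;; hypothetical: prepend an app-generated UUID key to each [x y] pair
(defn with-uuid [row]
  (into [(str (java.util.UUID/randomUUID))] row))

(map with-uuid rows)  ;; rows as in the execute! call above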
ah ok - i’ll revisit that. may be an artifact of an earlier version
hmmm, interesting thought
MySQL doesn't handle concurrent large inserts on a single table very well in my experience. You tend to get deadlocks.
Are you wrapping the (two) calls with your own transaction? If not, why are you trying to avoid c.j.j setting up a transaction?
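i.e., just dropping the :transaction? false and letting the default kick in:

;; execute! defaults to :transaction? true, so this runs the whole
;; 5,000-row batch inside a single transaction
(jdbc/execute! conn
               (into ["INSERT IGNORE INTO table (x,y) VALUES (?,?)"] rows)
               {:multi? true})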
i believe it was changed to try and avoid deadlock issues, but it seems it hasn’t achieved that
alright, so perhaps the solution is to re-enable that and write a bit more stuff on top to handle ‘concurrency’
it does seem that enabling it at least fails fast with a deadlock error instead of spinning for a minute
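something like this for the ‘stuff on top’, i guess (rough sketch, just retrying the whole batch when MySQL fails fast with a deadlock):

;; MySQL's deadlock error (1213, SQLSTATE 40001) surfaces from the
;; driver as a SQLTransactionRollbackException, so catch and retry
(defn insert-with-retry! [db rows]
  (loop [attempts 3]
    (let [result (try
                   (jdbc/execute! db
                                  (into ["INSERT IGNORE INTO table (x,y) VALUES (?,?)"] rows)
                                  {:multi? true})
                   (catch java.sql.SQLTransactionRollbackException e
                     (if (pos? attempts) ::retry (throw e))))]
      (if (= result ::retry)
        (recur (dec attempts))
        result))))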