MyBB Community Forums

Full Version: Too many connections error!
Hi all, I need some help finding what's causing my forums to return "Too many connections" errors.

I run several forums (around 4-5) with a combined total of usually over 1k users online at any given time.

I'm currently on HostGator's standard dedicated server. More details can be seen on their website: http://www.hostgator.com/dedicated

Previously everything was fine, but recently all my sites have been getting a lot of "Too many connections" errors. My traffic hasn't grown much recently, so this seems to be a side effect of something else slowing my server down.

Server load is normally fine, but then all of a sudden it starts to spike, PHP processes start to pile up, and finally the "Too many connections" error appears.

Here are my my.cnf settings:

[mysqld]
skip-external-locking
key_buffer_size = 1024M
table_open_cache = 512
sort_buffer_size = 4M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 32M
thread_cache_size = 20
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8

#innodb_file_per_table=1 # Ensure that each innodb table is its own binary data block just in case there's corruption.
query_cache_size=128M
max_allowed_packet=20M # don't change unless required for large blobs
table_cache=2048 # max 2048, can be increased if more Opened tables - SHOW STATUS LIKE 'Opened_tables';
wait_timeout=250 # can be increased if using persistent connections
interactive_timeout=25
#max_user_connections=25
open_files_limit=16384

#delayed_insert_timeout=20 # Turn on if max_connections being reached due to delayed inserts
#delayed_queue_size=300 # Turn on if max_connections being reached due to delayed inserts


collation_server=utf8_unicode_ci
character_set_server=utf8

#innodb_flush_method=O_DIRECT # Do not enable this without a good reason - it doesn't work well on new dedis
innodb_buffer_pool_size=1024M # check mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';" - free vs total
innodb_thread_concurrency=4 # Number of physical + virtual CPU's, be careful of adding more

tmpdir=/dev/shm
tmp_table_size = 384M
max_heap_table_size = 384M
max_connections=200 # Should be between 100-150, increase *slowly* because it causes MySQL to consume more memory!

Most of my bigger tables have been switched to InnoDB.

Can someone please help me find what's causing this?
How much data do you have in total in your InnoDB tables? If innodb_buffer_pool_size isn't big enough to hold the majority of it, you may find that a full table scan or poorly indexed query on one of the large InnoDB tables causes it to evict a lot of the data it had cached, and then the connection limit is hit as other queries wait for that data to be read back from disk. Just a theory.
http://www.mysqlperformanceblog.com/2007...pool_size/ has a bit on the best size for that variable.

Also, I'd be wary of storing temporary tables in /dev/shm: large temporary tables are written to disk precisely to stop them taking up too much RAM. You could be running out of RAM, causing the OS to page things out, which makes everything else run slower and results in your database connections getting backed up.
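
A quick way to check both numbers (just a sketch, run from the mysql client): total InnoDB data plus indexes via information_schema, and free vs. total pages currently in the buffer pool:

SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS innodb_mb
FROM information_schema.TABLES
WHERE engine = 'InnoDB';

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';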
(2014-05-31, 05:46 AM)Cameron:D Wrote: How much data do you have in total in your InnoDB tables? If innodb_buffer_pool_size isn't big enough to hold the majority of it, you may find that a full table scan or poorly indexed query on one of the large InnoDB tables causes it to evict a lot of the data it had cached, and then the connection limit is hit as other queries wait for that data to be read back from disk. Just a theory.

My biggest forum has a "total database size" of 2G. I have several forums on that server, and adding all of the "total database sizes" together comes to about 2G.

Not all my databases are InnoDB, although most of the big ones are.

Does this mean I should increase my innodb_buffer_pool_size to about 2G then?

I've raised the innodb_buffer_pool_size and changed some other settings, but I'm still getting "Too many connections" errors. Here is the updated my.cnf:

[mysqld]
skip-external-locking
key_buffer_size = 1024M
table_open_cache = 512
sort_buffer_size = 4M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 32M
thread_cache_size = 20
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8

#innodb_file_per_table=1 # Ensure that each innodb table is its own binary data block just in case there's corruption.
query_cache_size=128M
max_allowed_packet=20M # don't change unless required for large blobs
table_cache=1024 # max 2048, can be increased if more Opened tables - SHOW STATUS LIKE 'Opened_tables';
wait_timeout=250 # can be increased if using persistent connections
interactive_timeout=25
#max_user_connections=25
open_files_limit=16384

#delayed_insert_timeout=20 # Turn on if max_connections being reached due to delayed inserts
#delayed_queue_size=300 # Turn on if max_connections being reached due to delayed inserts


collation_server=utf8_unicode_ci
character_set_server=utf8

#innodb_flush_method=O_DIRECT # Do not enable this without a good reason - it doesn't work well on new dedis
innodb_buffer_pool_size=2048M # check mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';" - free vs total
innodb_thread_concurrency=8 # Number of physical + virtual CPU's, be careful of adding more

tmpdir=/dev/shm
tmp_table_size = 384M
max_heap_table_size = 384M
max_connections=150 # Should be between 100-150, increase *slowly* because it causes MySQL to consume more memory!

I've also run MySQL tuner; here's the output:

 >>  MySQLTuner 1.3.0 - Major Hayden <[email protected]>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
[OK] Currently running supported MySQL version 5.5.36-cll
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM 

[--] Data in MyISAM tables: 810M (Tables: 519)
[--] Data in InnoDB tables: 1G (Tables: 394)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 4)
[!!] Total fragmented tables: 497

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 5m 19s (53K q [169.219 qps], 3K conn, TX: 701M, RX: 11M)
[--] Reads / Writes: 72% / 28%
[--] Total buffers: 3.5G global + 14.4M per thread (150 max threads)
[!!] Maximum possible memory usage: 5.6G (150% of installed RAM)
[OK] Slow queries: 0% (7/53K)
[OK] Highest usage of available connections: 21% (32/150)
[OK] Key buffer size / total MyISAM indexes: 1.0G/381.5M
[OK] Key buffer hit rate: 97.7% (1M cached / 25K reads)
[OK] Query cache efficiency: 42.8% (14K cached / 33K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (2 temp sorts / 5K sorts)
[OK] Temporary tables created on disk: 6% (121 on disk / 1K total)
[OK] Thread cache hit rate: 98% (41 created / 3K connections)
[OK] Table cache hit rate: 98% (462 open / 469 opened)
[OK] Open file limit used: 3% (502/16K)
[OK] Table locks acquired immediately: 99% (35K immediate / 35K locks)
[OK] InnoDB buffer pool / data size: 2.0G/1.7G
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Reduce your overall MySQL memory footprint for system stability
    Enable the slow query log to troubleshoot bad queries

* Note: MySQL started within last 24 hours - recommendations may be inaccurate

I will rerun the script tomorrow to see if anything changes.

I really need help here!
Quickly looking at your MySQLTuner output I see the following:

[!!] Total fragmented tables: 497
Find your fragmented tables and optimise them. phpMyAdmin can do this easily: open a database, at the bottom of the list of tables choose the "Select tables with overhead" option, and from the action dropdown select optimise tables.
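
If you'd rather do it from the command line, something like this lists tables with reclaimable space (a sketch; data_free is only a rough indicator of fragmentation):

SELECT table_schema, table_name, engine,
       ROUND(data_free / 1024 / 1024) AS overhead_mb
FROM information_schema.TABLES
WHERE data_free > 0
ORDER BY data_free DESC;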

[!!] Maximum possible memory usage: 5.6G (150% of installed RAM)
Here you can see that you're going to need to cut down on memory usage, so we need to find things to reduce.

[OK] Highest usage of available connections: 21% (32/150)
Obviously the server hasn't been up long; wait a day (or a few) and see what this value reaches. Reduce your max_connections to just a bit above that, but I wouldn't set it any higher than it currently is.
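
To check it later without waiting for another MySQLTuner run, something along these lines works (a sketch):

SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';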

[OK] Key buffer size / total MyISAM indexes: 1.0G/381.5M
Your MyISAM key buffer is bigger than it needs to be, key_buffer_size can be reduced to 400M or so.

[OK] Open file limit used: 3% (502/16K)
Check this value again later, but open_files_limit can easily be reduced to 1024 or so by the looks of it.

[OK] InnoDB buffer pool / data size: 2.0G/1.7G
So all of your InnoDB data totals 1.7G, and the 2G pool is good. You could reduce it a bit, to 1.8G, but it will grow over time as more data is added. That said, 2G is half your RAM, which is a lot on top of the other services running on the server.

I'll cross-check your SQL config with mine tomorrow at some stage.
So I've just experienced my server going from a load of about 3-4 with no problems to suddenly spiking to over 100, with the sites returning "Too many connections". Restarting MySQL seems to fix it until it randomly hits again...

Running MySQL tuner afterwards showed something like:

Highest usage of available connections: 101% (151/150)

Anyway, I've gone through all my databases and optimized the ones with overhead, but there are still 431 fragmented tables.

I've also lowered key_buffer_size to 500M and open_files_limit to 2064.

It doesn't feel like the issue is my my.cnf anymore, though... the problem seems to hit out of nowhere and hit hard. Any ideas?
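
For what it's worth, a snapshot of what all those connections are actually doing at the moment of a spike would help narrow this down. A minimal sketch, next time it happens:

SHOW FULL PROCESSLIST;

or, from the shell:

mysqladmin -u root -p processlist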
You can't run a plain OPTIMIZE query on certain engines such as InnoDB. You can force a rebuild, though, by running
ALTER TABLE tablename ENGINE=InnoDB;

That would get rid of any overhead.
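
If there are a lot of them, MySQL can generate the rebuild statements for you (a sketch; review the output before running any of it):

SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;')
FROM information_schema.TABLES
WHERE engine = 'InnoDB' AND data_free > 0;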
Woke up to my server returning "Too many connections"

ran the mysql tuner and got:

>>  MySQLTuner 1.3.0 - Major Hayden <[email protected]>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
[OK] Currently running supported MySQL version 5.5.36-cll
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM 

[--] Data in MyISAM tables: 805M (Tables: 519)
[--] Data in InnoDB tables: 1G (Tables: 394)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[--] Data in MEMORY tables: 0B (Tables: 4)
[!!] Total fragmented tables: 454

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 7h 43m 48s (6M q [218.909 qps], 381K conn, TX: 82B, RX: 1B)
[--] Reads / Writes: 71% / 29%
[--] Total buffers: 3.0G global + 14.4M per thread (150 max threads)
[!!] Maximum possible memory usage: 5.1G (136% of installed RAM)
[OK] Slow queries: 0% (2K/6M)
[!!] Highest connection usage: 100%  (151/150)
[OK] Key buffer size / total MyISAM indexes: 500.0M/381.8M
[OK] Key buffer hit rate: 99.9% (127M cached / 82K reads)
[OK] Query cache efficiency: 46.7% (1M cached / 3M selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (49 temp sorts / 585K sorts)
[OK] Temporary tables created on disk: 6% (12K on disk / 203K total)
[OK] Thread cache hit rate: 99% (1K created / 381K connections)
[OK] Table cache hit rate: 23% (1K open / 4K opened)
[OK] Open file limit used: 37% (828/2K)
[OK] Table locks acquired immediately: 99% (3M immediate / 3M locks)
[OK] InnoDB buffer pool / data size: 2.0G/1.7G
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Reduce your overall MySQL memory footprint for system stability
    Enable the slow query log to troubleshoot bad queries
    Reduce or eliminate persistent connections to reduce connection usage
Variables to adjust:
  *** MySQL's maximum memory usage is dangerously high ***
  *** Add RAM before increasing MySQL buffer variables ***
    max_connections (> 150)
    wait_timeout (< 200)
    interactive_timeout (< 20)
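
On the "Enable the slow query log" recommendation: it can be switched on at runtime without restarting MySQL. A minimal sketch (the 2-second threshold is just an example):

SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 2;
SET GLOBAL log_output = 'TABLE';
-- then check what got caught:
SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;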

I'm currently asking the hosting company to help me update MySQL and convert a couple of my forums to InnoDB. I've actually run into this issue before, had the same thing done, and it seemed to help.

My host changed some tables to InnoDB and updated my.cnf to:

[mysqld]
innodb_file_per_table=1 # Ensure that each innodb table is its own binary data block just in case there's corruption.
query_cache_size=64M
thread_cache_size=4 # can be increased on servers with large numbers of active users
key_buffer_size=32M
max_allowed_packet=16M # don't change unless required for large blobs
table_open_cache=512 # max 2048, can be increased if more Opened tables - SHOW STATUS LIKE 'Opened_tables';
wait_timeout=300 # can be increased if using persistent connections
max_user_connections=25
open_files_limit=16384

#delayed_insert_timeout=20 # Turn on if max_connections being reached due to delayed inserts
#delayed_queue_size=300 # Turn on if max_connections being reached due to delayed inserts

myisam_sort_buffer_size=2M # can be increased per sessions if needed for alter tables (indexes, repair)

#query_cache_limit=2M # leave at default unless there is a good reason
#join_buffer=2M # leave at default unless there is a good reason
#sort_buffer_size=2M # leave at default unless there is a good reason
#read_rnd_buffer_size=256K # leave at default unless there is a good reason
#read_buffer_size=2M # leave at default unless there is a good reason

collation_server=utf8_unicode_ci
character_set_server=utf8

#innodb_flush_method=O_DSYNC # Do not enable this without a good reason - it doesn't work well on new dedis
innodb_buffer_pool_size=128M # check mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';" - free vs total
innodb_thread_concurrency=4 # Number of physical + virtual CPU's, be careful of adding more

tmp_table_size = 384M
max_heap_table_size = 384M
max_connections=125 # Should be between 100-150, increase *slowly* because it causes MySQL to consume more memory!

table_definition_cache=512 # increase by the same factor as table_open_cache
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=64M

general_log=0
slow_query_log=0
log-output=TABLE # select * from mysql.general_log order by event_time desc limit 10;
long_query_time=20 # select * from mysql.slow_log order by start_time desc limit 10;

innodb_io_capacity=100

This config really boggles my mind because everything seems to be set so low, especially innodb_buffer_pool_size and key_buffer_size. The weird thing is it seems to have halved my normal server load. I don't know if this will stop the sporadic "too many connections" errors, but fingers crossed.
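
One way to tell whether that 128M pool is genuinely too small for the ~1.7G of InnoDB data is to watch the buffer pool miss counters (a sketch):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

If Innodb_buffer_pool_reads (requests that had to go to disk) climbs quickly relative to Innodb_buffer_pool_read_requests, the pool is too small and queries are constantly hitting the disk.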

Can anyone make sense of these new my.cnf settings?

I've also noticed that max_heap_table_size and tmp_table_size are still very high.
My guess would still be this:

(2014-05-31, 05:46 AM)Cameron:D Wrote: so you could possibly be running out of RAM causing the OS to page things to disk, which makes everything else run slower, resulting in your database connections getting backed up.

By reducing the size of all the buffers you're freeing up RAM. The tradeoff is that queries are likely to run slower, as MySQL has to read data/indexes from disk instead of RAM, but that's better than the site not functioning at all.
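
As a rough worked example of where MySQLTuner's "maximum possible memory usage" figure comes from (an approximation, not an exact accounting):

max memory ~ global buffers (key_buffer + query_cache + innodb_buffer_pool + tmp tables)
             + max_connections x per-thread buffers (sort + read + read_rnd + join)
           ~ 3.5G + 150 x 14.4M
           ~ 5.6G

against roughly 3.7G of installed RAM, which is why it was flagged at 150%. The host's new config shrinks both sides of that sum.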
(2014-06-03, 06:04 AM)Cameron:D Wrote: ...by reducing the size of all the buffers you're freeing up RAM. The tradeoff is that queries are likely to run slower, as MySQL has to read data/indexes from disk instead of RAM, but that's better than the site not functioning at all.

What about the super high table_sizes?

tmp_table_size = 384M
max_heap_table_size = 384M

Also, considering MySQL tuner now shows I'm only using 26% of my total memory, I should be able to raise some of the buffers, right? What's a safe percentage of total memory to use?