
Don't be afraid to de-normalize
40/sec to 500/sec

Introduction

Surprised by the title? Well, this is the story of how we cracked the scalability jinx, from handling a meagre 40 records per second to 500 records per second. Beware, most of the problems we faced were straightforward, so experienced people might find this rather obvious.

Contents
* 1.0 Where were we?
  * 1.1 Memory hits the sky
  * 1.2 Low processing rate
  * 1.3 Data loss :-(
  * 1.4 Mysql pulls us down
  * 1.5 Slow Web Client
* 2.0 Road to Nirvana
  * 2.1 Controlling memory!
  * 2.2 Streamlining processing rate
  * 2.3 What data loss uh-uh?
  * 2.4 Tuning SQL Queries
  * 2.5 Tuning database schema
  * 2.6 Mysql helps us forge ahead!
  * 2.7 Faster...faster Web Client
* 3.0 Bottom line

1.0 Where were we?

Initially we had a system which could scale only up to 40 records/sec. I can even recall the discussion about "what should be the ideal rate of records?". Finally we decided that 40/sec was the ideal rate for a single firewall. So when we went out, we needed to support at least 3 firewalls; hence we decided that 120/sec would be the ideal rate. Based on the data from our competitor(s) we came to the conclusion that they could support around 240/sec. We thought it was OK, as it was our first release, because all the competitors talked about the number of firewalls they supported but not about the rate.

1.1 Memory hits the sky

Our memory usage was always hitting the sky, even at 512MB! (OutOfMemory exception.) We blamed cewolf's in-memory caching of the generated images, but we could not escape for long. No matter whether we connected the client or not, we used to hit the sky in a couple of days, 3-4 days flat at most. Interestingly, this was reproducible when we sent data at what were then very high rates, around 50/sec. You guessed it right: an unbounded buffer which grows until it hits the roof (a small sketch of this pattern follows at the end of this section).

1.2 Low processing rate

We were processing records at the rate of 40/sec. We were using bulk updates of dataobject(s), but that did not give the expected speed. Because of this we started to hold data in memory, which in turn hogged memory.

1.3 Data loss :-(

At very high speeds we used to miss many packets. We seemed to have little data loss, but it resulted in a memory hog. On some tweaking to limit the buffer size, we started having a steady data loss of about 20% at very high rates.

1.4 Mysql pulls us down

We were facing a tough time when we imported a log file of about 140MB. Mysql started to hog, the machine started crawling and sometimes it even stopped responding. Above all, we started getting deadlock(s) and transaction timeout(s), which eventually reduced the responsiveness of the system.

1.5 Slow Web Client

Here again we blamed the number of graphs we showed in a page as the bottleneck, ignoring the fact that there were many other factors pulling the system down. The pages used to take 30 seconds to load for a page with 6-8 graphs and tables, after 4 days at the Internet Data Center.
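To make the arithmetic in 1.1-1.3 concrete, here is a minimal, purely illustrative sketch (hypothetical class, not the product's actual code) of the buffering pattern described above: records arrive at up to ~500/sec while the consumer drains only ~40/sec, so an unbounded in-memory buffer grows until the JVM throws OutOfMemory, and naively capping it simply trades the crash for dropped records.

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration of the buffering described in 1.1-1.3.
public class RecordBuffer {
    private final Deque<String> buffer = new ArrayDeque<>();
    private final int capacity; // <= 0 means "unbounded", the original design

    public RecordBuffer(int capacity) {
        this.capacity = capacity;
    }

    // Called by the receiver at up to ~500 records/sec.
    public synchronized boolean offer(String record) {
        if (capacity > 0 && buffer.size() >= capacity) {
            return false;           // naive cap: the record is silently lost (the ~20% loss)
        }
        buffer.addLast(record);     // unbounded: ~460 surplus records/sec pile up until OOM
        return true;
    }

    // Called by the processing thread at ~40 records/sec.
    public synchronized String poll() {
        return buffer.pollFirst();
    }
}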
2.0 Road to Nirvana

2.1 Controlling memory!

We tried to put a limit of 10,000 on the buffer size, but it did not last for long. The major flaw in the design was the assumption that a buffer of around 10,000 would suffice, i.e. that we would always be processing records before the buffer reached 10,000. In line with the saying "if something can go wrong, it will go wrong", it went wrong: we started losing data.

Subsequently we decided to go with flat-file based caching, wherein the data was dumped into a flat file and loaded into the database using "load data infile". This was many times faster than a bulk insert via the database driver (a minimal sketch of this approach appears at the end of this section). You might also want to check out some possible enhancements with "load data infile". This fixed our problem of the ever-growing buffer of raw records.

The second problem we faced was cewolf's in-memory caching mechanism. By default it used "TransientSessionStorage", which caches the image objects in memory; there seemed to be some problem in cleaning up the objects even after the references were lost. So we wrote a small "FileStorage" implementation which stores the image objects in a local file, to be served as and when the request comes in. Moreover, we also implemented a cleanup mechanism to remove stale images (images older than 10 minutes).

Another interesting aspect we found here was that the garbage collector had the lowest priority, so the objects created for each record were hardly ever cleaned up. Here is a little math to explain the magnitude of the problem. Whenever we receive a log record we create ~20 objects (hashmaps, tokenized strings etc.), so at the rate of 500/sec, in a single second the number of objects was 10,000 (20*500*1). Due to the heavy processing, the garbage collector never had a chance to clean up the objects. So all we had to do was a minor tweak: we just assigned "null" to the object references. Voila! The garbage collector was never stressed, I guess ;-)
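Below is a minimal sketch of the flat-file plus "load data infile" approach from 2.1, assuming a hypothetical tab-separated raw_records table and the MySQL Connector/J driver on the classpath (recent driver versions also need allowLoadLocalInfile=true in the JDBC URL); the article does not show the actual implementation, column layout or dump interval, so all names here are placeholders.

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

// Dump buffered records to a flat file, then hand the whole file to MySQL in one
// "LOAD DATA LOCAL INFILE" statement instead of issuing per-record inserts.
public class FlatFileLoader {

    // Write one tab-separated line per record; columns must match the target table.
    static File dumpToFlatFile(List<String[]> records) throws IOException {
        File file = File.createTempFile("raw_records", ".tsv");
        try (BufferedWriter out = new BufferedWriter(new FileWriter(file))) {
            for (String[] fields : records) {
                out.write(String.join("\t", fields));
                out.newLine();
            }
        }
        return file;
    }

    // Bulk-load the file; "raw_records" and its layout are hypothetical.
    static void bulkLoad(Connection conn, File file) throws SQLException {
        String sql = "LOAD DATA LOCAL INFILE '" + file.getAbsolutePath().replace("\\", "/")
                + "' INTO TABLE raw_records"
                + " FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n'";
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        }
        file.delete(); // the flat file is only a staging area
    }
}

The gain comes from replacing thousands of per-record round trips through the driver with one bulk statement per dump; how often such a dump should be issued is revisited in 2.4 below.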
2.2 Streamlining processing rate

The processing rate was at a meagre 40/sec, which means we could hardly withstand even a small burst of log records! The memory control gave us some relief, but the real problem was the application of the alert filters over the records. We had around 20 properties for each record, and we used to search on all of them. We changed the implementation to match only those properties we actually had criteria for. Moreover, we also had a memory leak in the alert filter processing: we maintained a queue which grew forever. So we had to maintain flat-file object dumping to avoid re-parsing records to form objects. We also used to search for a match on each property even when no alert criteria were configured.

2.3 What data loss uh-uh?

Once we fixed the memory issues in receiving data, i.e. dumping into a flat file, we never lost data! In addition, we had to remove a couple of unwanted indexes on the raw table to avoid overhead while dumping data. We had indexes on columns which could have a maximum of 3 possible values, which in fact made the insert slower and was not useful.

2.4 Tuning SQL Queries

Your queries are your keys to performance. Once you start nailing down the issues, you will see that you might even have to de-normalize the tables. We did it! Here are some of the key learnings:

* Use "Analyze table" to identify how mysql plans the query. This gives you insight into why the query is slow, i.e. whether it is using the correct indexes, whether it is doing a table-level scan, etc.
* Never delete rows when you deal with huge data, in the order of 50,000 records in a single table. Always try to do a "drop table" as much as possible. If that is not possible, redesign your schema; that is your only way out!
* Avoid unwanted join(s); don't be afraid to de-normalize (i.e. duplicate the column values). Avoid join(s) as much as possible, they tend to pull your query down. One hidden advantage is the fact that they impose simplicity in your queries.
* If you are dealing with bulk data, always use "load data infile". There are two variants here, local and remote: use local if mysql and the application are on the same machine, otherwise use remote.
* Try to split your complex queries into two or three simpler queries. The advantage of this approach is that the mysql resource is not hogged up for the entire process. Tend to use temporary tables instead of a single query which spans across 5-6 tables.
* When you deal with a huge amount of data, i.e. you want to process say 50,000 records or more in a single query, try using LIMIT to batch-process the records (see the sketch after this list). This will help you scale the system to new heights.
* Always use smaller transaction(s) instead of large ones spanning "n" tables. Large transactions lock up mysql resources, which might slow the system down even for simple queries.
* Use join(s) on columns with indexes or foreign keys.
* Ensure that the queries from the user interface have criteria or a limit.
* Also ensure that the criteria column is indexed.
* Do not put numeric values in sql criteria within quotes, because mysql then does a type cast.
* Use temporary tables as much as possible, and drop them...
* Insert of select/delete is a double table lock... be aware...
* Take care that you do not burden the mysql database with the frequency of your updates. We had a typical case: we used to dump to the database after every 300 records. So when we started testing for 500/sec we saw that mysql was literally dragging us down. That is when we realized that, at the rate of 500/sec, there was a "load data infile" request every second to the mysql database. So we changed it to dump the records every 3 minutes rather than every 300 records.
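As an illustration of the "use LIMIT to batch-process" tip above, here is a hedged sketch that walks a large table in bounded chunks, keyed on an indexed numeric id instead of an ever-growing offset; the raw_records table and its columns are assumptions made for the example, not the product's real schema.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Process a huge table in bounded chunks so a single query never locks up
// mysql resources for the whole run.
public class BatchProcessor {

    static void processInBatches(Connection conn, int batchSize) throws SQLException {
        long lastId = 0;
        // The criteria column (id) is indexed and the numeric value is bound,
        // not quoted, so mysql does not have to type-cast it (see the tips above).
        String sql = "SELECT id, payload FROM raw_records WHERE id > ? ORDER BY id LIMIT ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            while (true) {
                stmt.setLong(1, lastId);
                stmt.setInt(2, batchSize);
                int rows = 0;
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id");
                        handle(rs.getString("payload"));
                        rows++;
                    }
                }
                if (rows < batchSize) {
                    break; // last, partially filled batch
                }
            }
        }
    }

    private static void handle(String payload) {
        // placeholder for the per-record work
    }
}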
2.5 Tuning database schema

When you deal with a huge amount of data, always ensure that you partition your data. That is your road to scalability. A single table with, say, 10 lakh (1 million) rows can never scale when you intend to run queries for reports. Always have two levels of tables: raw tables for the actual data, and another set of report tables (the tables which the user interfaces query on!). Always ensure that the data in your report tables never grows beyond a limit. In case you plan to use Oracle, you can try out partitioning based on criteria, but unfortunately mysql does not support that. So we have to do it ourselves: maintain a meta table which holds the header information, i.e. which table to look in for a given set of criteria, normally time (a rough sketch follows this list).

* We had to walk through our database schema; we added some indexes, deleted some, and even duplicated column(s) to remove costly join(s).
* Going forward we realized that having the raw tables as InnoDB was actually an overhead to the system, so we changed them to MyISAM.
* We also went to the extent of reducing the number of rows in static tables involved in joins.
* NULL in database tables seems to cause some performance hit, so avoid it.
* Don't have indexes on columns which have only 2-3 allowed values.
* Cross-check the need for each index on your table; they are costly. If the tables are InnoDB, double-check their need, because InnoDB tables seem to take around 10-15 times the size of MyISAM tables.
* Use MyISAM whenever there is a clear majority of either select or insert queries. If both inserts and selects are going to be heavy, it is better to have it as InnoDB.
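The meta-table idea from the paragraph above can be sketched roughly as follows; report_meta, its columns and the per-period report tables are hypothetical names used only to illustrate looking up "which table to query" for a given time criterion.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

// Resolve which report table(s) hold data for a requested time window by consulting
// a small meta table, instead of letting one giant report table grow without limit.
public class ReportTableResolver {

    // Hypothetical meta table: report_meta(table_name, start_time, end_time)
    static List<String> tablesFor(Connection conn, Timestamp from, Timestamp to) throws SQLException {
        String sql = "SELECT table_name FROM report_meta WHERE start_time < ? AND end_time > ?";
        List<String> tables = new ArrayList<>();
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setTimestamp(1, to);
            stmt.setTimestamp(2, from);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    tables.add(rs.getString("table_name"));
                }
            }
        }
        return tables; // the report query is then run against each returned table
    }
}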
2.6 Mysql helps us forge ahead!

Tune your mysql server ONLY after you have fine-tuned your queries/schemas and your code. Only then can you see a perceivable improvement in performance. Here are some of the parameters that come in handy:

* Use a buffer pool size which will enable your queries to execute faster: --innodb_buffer_pool_size=64M for InnoDB and --key_buffer_size=32M for MyISAM.
* Even simple queries started taking more time than expected. We were actually puzzled! We realized that mysql seems to load the index of any table it starts inserting on. So what typically happened was that any simple query to a table with 5-10 rows took around 1-2 secs. On further analysis we found that just before the simple query, a "load data infile" had run. This disappeared when we changed the raw tables to MyISAM, because the buffer sizes for InnoDB and MyISAM are two different configurations. For more configurable parameters see here.

Tip: start your mysql with the option --log-error to enable error logging.

2.7 Faster...faster Web Client

The user interface is the key to any product; the perceived speed of the page is especially important! Here is a list of solutions and learnings that might come in handy:

* If your data is not going to change for, say, 3-5 minutes, it is better to cache your client-side pages.
* Tend to use iframe(s) for inner graphs etc.; they give a perceived fastness to your pages. Better still, use a javascript-based content-loading mechanism. This is something you might want to do when you have, say, 3+ graphs on the same page.
* Internet Explorer displays the whole page only when all the contents are received from the server. So it is advisable to use iframes or javascript for content loading.
* Never use multiple/duplicate entries of the CSS file in the html page. Internet Explorer tends to load each CSS file as a separate entry and applies it to the complete page!

3.0 Bottom line

Your queries and schema make the system slower! Fix them first and then blame the database!

See Also
* High Performance Mysql
* Query Performance
* Explain Query
* Optimizing Queries
* InnoDB Tuning
* Tuning Mysql

Categories: Firewall Analyzer | Performance Tips
This page was last modified 18:00, 31 August 2005. -Racobweb-
