MongoDB inserts complete almost instantly, with test data --- 飞天博客
Over the past few days I have been reading the MongoDB official documentation and running insert tests, and the numbers genuinely surprised me, so let the data speak for itself. I used an ordinary PC: an AMD dual-core CPU at 2.7 GHz with 4 GB of RAM. The machine was not dedicated to the database while testing; it was also running MyEclipse, QQ, antivirus software, and so on, and the CPU sat at roughly 95% load during the test.
So what were the actual results?
When I first started I was not careful and launched 10 threads at once, each inserting 10,000 documents. The system could not take the load and the machine shut off to a black screen (this PC has some issues), which was rather embarrassing.
After some gradual trial and error I changed the workload: each thread now inserts 1,000 documents instead of 10,000, and 10 threads became 100 threads. The result: with 100 threads each performing 1,000 inserts, MongoDB finished in 76,760 ms, roughly 1 minute 17 seconds. Relational databases are still very capable by comparison.
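For reference, here is a minimal sketch of how such a multi-threaded insert test can be written with the same legacy Java driver used later in this post. This is not the exact code I ran; the thread count, insert count, port, database and collection names are placeholders you can adjust.

import java.util.concurrent.CountDownLatch;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class InsertBenchmark {
    public static void main(String[] args) throws Exception {
        final int threads = 100;              // or 500 for the second test
        final int insertsPerThread = 1000;    // or 10 for the second test
        final MongoClient mongoClient = new MongoClient("localhost", 30000);
        final DBCollection coll = mongoClient.getDB("mytest").getCollection("mydb");
        final CountDownLatch done = new CountDownLatch(threads);

        long start = System.currentTimeMillis();
        for (int t = 1; t <= threads; t++) {
            final int threadId = t;
            new Thread(new Runnable() {
                public void run() {
                    long t0 = System.currentTimeMillis();
                    for (int i = 0; i < insertsPerThread; i++) {
                        coll.insert(new BasicDBObject("threadId", threadId).append("i", i));
                    }
                    // print "threadId:elapsedMs", the format used in the table below
                    System.out.println(threadId + ":" + (System.currentTimeMillis() - t0));
                    done.countDown();
                }
            }).start();
        }
        done.await();                          // wait for every thread to finish
        System.out.println("total: " + (System.currentTimeMillis() - start));
        mongoClient.close();
    }
}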
Then I ran a more detailed test with 500 threads, each inserting 10 documents concurrently. The records below use the format thread id : thread completion time (ms). Total elapsed time: 5542 ms.
262:645 | 173:4896 | 388:5057 | 57:5351 |
460:561 | 222:4874 | 102:5230 | 149:5387 |
18:2183 | 298:4752 | 109:5233 | 384:5179 |
456:2105 | 301:4768 | 386:5059 | 396:5181 |
46:3060 | 176:4895 | 289:5126 | 32:5352 |
412:2851 | 189:4918 | 268:5132 | 437:4935 |
448:2909 | 133:4933 | 209:5259 | 444:4935 |
462:3363 | 231:4919 | 140:5259 | 101:5357 |
425:3423 | 91:4900 | 348:5116 | 459:4939 |
461:3417 | 72:4898 | 249:5260 | 418:4938 |
458:3507 | 365:4748 | 73:5239 | 451:4940 |
450:3544 | 221:4941 | 389:5074 | 147:5399 |
452:3585 | 99:4927 | 148:5265 | 142:5398 |
428:3662 | 299:4816 | 152:5275 | 404:5193 |
454:3735 | 241:4951 | 286:5119 | 166:5396 |
423:3820 | 243:4967 | 28:5242 | 51:5366 |
259:4270 | 119:4969 | 45:5246 | 202:5367 |
349:4139 | 19:5002 | 107:5249 | 489:4632 |
421:3872 | 363:4792 | 247:5269 | 484:4632 |
188:4285 | 223:4973 | 103:5249 | 442:4948 |
96:4284 | 383:4801 | 245:5271 | 435:4949 |
414:4049 | 25:5023 | 391:5086 | 491:4637 |
426:3881 | 227:4998 | 86:5250 | 495:4637 |
424:3945 | 371:4822 | 83:5252 | 497:4638 |
416:4122 | 225:5008 | 160:5283 | 143:5413 |
33:4384 | 367:4831 | 354:5101 | 499:4641 |
379:4228 | 52:4990 | 116:5253 | 145:5417 |
420:3983 | 179:5019 | 50:5253 | 493:4644 |
310:4275 | 422:4605 | 382:5082 | 480:4646 |
131:4428 | 256:5003 | 90:5256 | 482:4647 |
201:4427 | 121:5070 | 29:5257 | 485:4647 |
67:4410 | 123:5074 | 380:5085 | 487:4648 |
203:4496 | 261:5054 | 242:5259 | 93:5385 |
252:4550 | 39:5057 | 43:5260 | 138:5421 |
308:4477 | 233:5092 | 78:5260 | 483:4653 |
343:4493 | 35:5076 | 127:5287 | 481:4654 |
403:4448 | 417:4723 | 95:5265 | 157:5425 |
3:4685 | 430:4661 | 385:5103 | 465:4659 |
306:4495 | 237:5109 | 183:5290 | 406:5217 |
401:4456 | 125:5111 | 251:5288 | 469:4662 |
230:4626 | 253:5111 | 376:5093 | 471:4664 |
419:4225 | 229:5120 | 64:5268 | 476:4657 |
110:4627 | 239:5120 | 378:5097 | 477:4660 |
94:4640 | 235:5126 | 248:5256 | 467:4664 |
38:4639 | 36:5105 | 270:5168 | 479:4661 |
405:4483 | 273:5006 | 269:5171 | 478:4660 |
126:4659 | 272:4987 | 295:5165 | 473:4670 |
294:4541 | 360:4958 | 181:5298 | 463:4672 |
364:4508 | 192:5122 | 337:5162 | 472:4661 |
134:4681 | 362:4963 | 361:5120 | 408:5228 |
320:4557 | 113:5125 | 198:5278 | 470:4664 |
284:4560 | 62:5126 | 236:5280 | 468:4664 |
359:4538 | 150:5155 | 207:5303 | 79:5404 |
357:4540 | 117:5135 | 373:5120 | 141:5441 |
358:4541 | 7:5200 | 41:5281 | 474:4674 |
296:4575 | 88:5141 | 372:5107 | 429:4986 |
53:4701 | 5:5208 | 10:5351 | 466:4670 |
13:4772 | 114:5143 | 400:5110 | 56:5407 |
54:4708 | 328:5019 | 58:5282 | 431:4987 |
266:4614 | 55:5146 | 312:5160 | 427:4989 |
297:4610 | 283:5040 | 14:5359 | 486:4673 |
255:4741 | 282:5021 | 395:5127 | 475:4684 |
197:4743 | 70:5146 | 369:5132 | 488:4672 |
300:4596 | 316:5024 | 65:5292 | 159:5449 |
375:4563 | 351:5028 | 84:5293 | 492:4675 |
195:4749 | 274:5025 | 156:5326 | 494:4674 |
200:4731 | 254:5138 | 263:5294 | 410:5242 |
69:4731 | 49:5155 | 196:5297 | 490:4677 |
292:4607 | 279:5049 | 144:5330 | 139:5453 |
120:4739 | 336:5033 | 210:5299 | 409:5252 |
311:4628 | 212:5160 | 158:5329 | 433:4997 |
40:4739 | 11:5224 | 281:5194 | 464:4681 |
98:4742 | 250:5146 | 129:5324 | 498:4679 |
37:4744 | 97:5165 | 169:5337 | 208:5422 |
66:4742 | 399:5003 | 162:5332 | 496:4681 |
213:4769 | 74:5171 | 370:5128 | 12:5492 |
106:4748 | 285:5065 | 353:5182 | 500:4677 |
215:4774 | 187:5197 | 394:5131 | 76:5423 |
171:4777 | 340:5050 | 167:5341 | 180:5426 |
350:4632 | 124:5175 | 455:4885 | 333:5313 |
63:4755 | 112:5178 | 185:5331 | 318:5302 |
104:4755 | 47:5180 | 368:5136 | 322:5302 |
288:4630 | 194:5182 | 151:5345 | 218:5428 |
9:4822 | 352:5060 | 163:5348 | 228:5428 |
303:4646 | 75:5183 | 118:5313 | 130:5428 |
68:4757 | 100:5183 | 153:5349 | 238:5431 |
87:4765 | 34:5183 | 338:5190 | 204:5432 |
214:4766 | 332:5062 | 324:5190 | 111:5434 |
366:4591 | 271:5083 | 154:5347 | 27:5434 |
122:4766 | 71:5193 | 377:5156 | 6:5504 |
217:4795 | 258:5179 | 390:5145 | 60:5434 |
244:4784 | 246:5180 | 265:5216 | 182:5438 |
240:4786 | 184:5197 | 4:5389 | 8:5508 |
44:4788 | 161:5218 | 164:5349 | 26:5437 |
287:4703 | 307:5084 | 267:5217 | 331:5327 |
339:4701 | 309:5083 | 1:5394 | 42:5440 |
407:4650 | 15:5263 | 155:5357 | 313:5330 |
193:4839 | 132:5198 | 330:5200 | 24:5439 |
260:4810 | 277:5093 | 342:5200 | 92:5443 |
186:4821 | 257:5220 | 436:4903 | 329:5331 |
128:4819 | 135:5243 | 453:4906 | 345:5323 |
341:4706 | 30:5199 | 447:4908 | 290:5320 |
178:4823 | 397:5037 | 172:5360 | 234:5447 |
335:4709 | 314:5080 | 432:4909 | 327:5334 |
146:4842 | 326:5080 | 445:4912 | 278:5322 |
305:4717 | 80:5205 | 443:4912 | 276:5323 |
226:4831 | 85:5207 | 168:5364 | 325:5339 |
302:4707 | 211:5231 | 402:5161 | 356:5296 |
220:4837 | 77:5210 | 174:5360 | 319:5343 |
224:4837 | 31:5210 | 434:4913 | 381:5292 |
199:4861 | 293:5100 | 446:4916 | 264:5450 |
347:4718 | 22:5210 | 441:4918 | 89:5457 |
219:4865 | 280:5087 | 438:4918 | 108:5457 |
190:4844 | 20:5211 | 457:4920 | 232:5458 |
355:4692 | 315:5108 | 439:4921 | 82:5458 |
115:4848 | 137:5260 | 440:4920 | 16:5514 |
411:4628 | 393:5058 | 398:5172 | 136:5500 |
17:4903 | 48:5221 | 449:4923 | 346:5338 |
413:4637 | 334:5100 | 2:5416 | 291:5352 |
177:4884 | 392:5052 | 21:5400 | 323:5351 |
415:4639 | 206:5226 | 374:5174 | 321:5351 |
61:4864 | 105:5227 | 387:5184 | 344:5340 |
304:4739 | 216:5228 | 23:5402 | 317:5352 |
175:4889 | 59:5228 | 165:5385 | 191:5489 |
275:4764 | 205:5252 | 170:5380 | 81:5467 |
The maximum number of connections on my machine is 500, so I did not test beyond that, but the efficiency shown above is already very good: the earliest threads finished their inserts almost instantly. If you needed 500 users to register at the same time, a single MongoDB instance would handle it easily, as long as you raise its connection limit. Note that this refers to the number of connections, not the number of MongoClient instances; by default you only create one MongoClient instance.
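As a minimal sketch (assuming the legacy 2.x Java driver; the pool size, host and port here are placeholders), raising the per-host connection pool on the single shared MongoClient might look like this:

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class MongoClientFactory {
    // One shared MongoClient for the whole application, with a larger connection pool.
    public static MongoClient create() throws Exception {
        MongoClientOptions options = MongoClientOptions.builder()
                .connectionsPerHost(500)      // pool size matching the 500 test threads
                .build();
        return new MongoClient(new ServerAddress("localhost", 30000), options);
    }
}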
Checking on the server side, db.mydb.count() returns 5000, so nothing was lost in this test. The speed speaks for itself. I also looked into why it is so fast, but what I found differs a little from what my experiment actually showed.
The reference material says:
With this kind of MongoDB write, the client sends the document to the server and then goes off to do something else. The client receives neither an "OK, got it" nor a "there was a problem, can you resend?" response. The upside is obvious: these operations are very fast. But if the server has a problem the client will not know; if the server crashes or loses power, the client just keeps sending writes.
The MongoDB build I downloaded behaves differently, though: when I shut the server down, the client threw an error! So a server failure is detected after all. As for verifying whether a write succeeded, the Java driver returns a WriteResult, from which you can get the most recent error information. Presumably this version enables "safe writes" by default, meaning an insert reports back and the user can decide whether to re-insert data that failed the last time. Even with "safe writes" it is still very fast in my view; switching from "safe" to "unsafe" writes should make inserts even faster.
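As a rough sketch of the difference (assuming the legacy Java driver; host, port and names are placeholders), the same insert can be issued under an acknowledged ("safe") or unacknowledged ("unsafe") write concern:

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;
import com.mongodb.WriteConcern;
import com.mongodb.WriteResult;

public class WriteConcernDemo {
    public static void main(String[] args) throws Exception {
        MongoClient mongoClient = new MongoClient("localhost", 30000);
        DBCollection coll = mongoClient.getDB("mytest").getCollection("test");

        // "Safe" write: the driver waits for the server's acknowledgement,
        // so a crashed or unreachable server surfaces as an error right away.
        coll.setWriteConcern(WriteConcern.ACKNOWLEDGED);
        WriteResult result = coll.insert(new BasicDBObject("name", "safe-insert"));
        System.out.println("acknowledged insert: " + result);

        // "Unsafe" write: fire-and-forget, faster, but failures go unnoticed.
        coll.setWriteConcern(WriteConcern.UNACKNOWLEDGED);
        coll.insert(new BasicDBObject("name", "unsafe-insert"));

        mongoClient.close();
    }
}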
Here is the CRUD code for the MongoDB Java driver:
import java.net.UnknownHostException;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class MongoTest {
    public static void main(String[] args) throws UnknownHostException {
        MongoClient mongoClient = new MongoClient("localhost", 30000);
        DB db = mongoClient.getDB("mytest");
        // List the collection names in the current database
        // Set<String> colls = db.getCollectionNames();
        // for (String s : colls) {
        //     System.out.println(s);
        // }
        DBCollection collection = db.getCollection("test");
        BasicDBObject doc = new BasicDBObject("name", "xiaohua2");
        collection.insert(doc);
        System.out.println(collection.count());
        mongoClient.close();
        /*
        // Get a collection; this is the interface used for CRUD
        DBCollection coll = db.getCollection("mydb");
        // Insert a document (a collection is roughly the equivalent of a SQL table)
        // The _id element has been added automatically by MongoDB to your document.
        // Remember, MongoDB reserves element names that start with "_"/"$" for internal use
        BasicDBObject doc = new BasicDBObject("name", "MongoDB")
                .append("type", "database")
                .append("count", 1)
                .append("info", new BasicDBObject("x", 203).append("y", 102));
        coll.insert(doc);
        // Get the first document
        DBObject myDoc = coll.findOne();
        System.out.println(myDoc);
        // Insert multiple documents
        for (int i = 0; i < 100; i++) {
            coll.insert(new BasicDBObject("i", i));
        }
        // Count the documents
        System.out.println(coll.getCount());
        // Iterate with a cursor
        DBCursor cursor = coll.find();
        try {
            while (cursor.hasNext()) {
                System.out.println(cursor.next());
            }
        } finally {
            cursor.close();
        }
        mongoClient.close();
        // Query by a condition
        // BasicDBObject query = new BasicDBObject("i", 71);
        // cursor = coll.find(query);
        // try {
        //     while (cursor.hasNext()) {
        //         System.out.println(cursor.next());
        //     }
        // } finally {
        //     cursor.close();
        // }
        */
    }
}
Note: remember to close the MongoClient when you are done.
Note: if you repost this article, please credit the source: blog.csdn.net/xh199110 (飞天博客).
If anything here is incorrect, corrections are welcome. I wrote this based on the official documentation, other references, and my own understanding, so that we can all learn together. Thanks.
