
How to use MongoDB as a pure in-memory DB (Redis style)

Jun 07, 2016, 04:38 PM


The idea

There has been a growing interest in using MongoDB as an in-memory database, meaning that the data is not stored on disk at all. This can be super useful for applications like:

  • a write-heavy cache in front of a slower RDBMS system
  • embedded systems
  • PCI compliant systems where no data should be persisted
  • unit testing where the database should be light and easily cleaned

That would be really neat indeed if it were possible: one could leverage the advanced querying / indexing capabilities of MongoDB without hitting the disk. As you probably know, disk IO (especially random) is the system bottleneck in 99% of cases, and if you are writing data you cannot avoid hitting the disk.

One sweet design choice of MongoDB is that it uses memory-mapped files to handle access to data files on disk. This means that MongoDB does not know the difference between RAM and disk, it just accesses bytes at offsets in giant arrays representing files and the OS takes care of the rest! It is this design decision that allows MongoDB to run in RAM with no modification.
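One way to see this design with your own eyes, offered here as a side note rather than part of the original walkthrough, is to dump the memory map of a running mongod and look for its data files, which appear as ordinary file-backed mappings (this assumes a single mongod instance, so that pidof returns one PID):

<code># pmap $(pidof mongod)</code>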

How it is done

This is all achieved by using a special type of filesystem called tmpfs. Linux makes it appear as a regular FS, but it is located entirely in RAM (unless it is larger than RAM, in which case it can swap, which can be useful!). I have 32GB of RAM on this server; let's create a 16GB tmpfs:

<code># mkdir /ramdata
# mount -t tmpfs -o size=16000M tmpfs /ramdata/
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvde1             5905712   4973924    871792  86% /
none                  15344936         0  15344936   0% /dev/shm
tmpfs                 16384000         0  16384000   0% /ramdata</code>

Now let’s start MongoDB with the appropriate settings. smallfiles and noprealloc should be used to reduce the amount of RAM wasted, and they will not affect performance since everything is RAM-based. nojournal should be used as well, since keeping a journal makes no sense in this context!

<code>dbpath = /ramdata
nojournal = true
smallfiles = true
noprealloc = true</code>
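With the options above saved to a config file (the path below is only an example), starting the server is a one-liner; the second command shows the equivalent direct flags:

<code># mongod --config /etc/mongod-ram.conf
# mongod --dbpath /ramdata --nojournal --smallfiles --noprealloc</code>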

After starting MongoDB, you will find that it works just fine and the files are as expected in the FS:

<code># mongo
MongoDB shell version: 2.3.2
connecting to: test
> db.test.insert({a:1})
> db.test.find()
{ "_id" : ObjectId("51802115eafa5d80b5d2c145"), "a" : 1 }
# ls -l /ramdata/
total 65684
-rw-------. 1 root root 16777216 Apr 30 15:52 local.0
-rw-------. 1 root root 16777216 Apr 30 15:52 local.ns
-rwxr-xr-x. 1 root root        5 Apr 30 15:52 mongod.lock
-rw-------. 1 root root 16777216 Apr 30 15:52 test.0
-rw-------. 1 root root 16777216 Apr 30 15:52 test.ns
drwxr-xr-x. 2 root root       40 Apr 30 15:52 _tmp</code>

Now let’s add some data and make sure it behaves properly. We will create a 1KB document and add 4 million of them:

<code>> str = ""
> aaa = "aaaaaaaaaa"
aaaaaaaaaa
> for (var i = 0; i  for (var i = 0; i  db.foo.stats()
{
        "ns" : "test.foo",
        "count" : 4000000,
        "size" : 4544000160,
        "avgObjSize" : 1136.00004,
        "storageSize" : 5030768544,
        "numExtents" : 26,
        "nindexes" : 1,
        "lastExtentSize" : 536600560,
        "paddingFactor" : 1,
        "systemFlags" : 1,
        "userFlags" : 0,
        "totalIndexSize" : 129794000,
        "indexSizes" : {
                "_id_" : 129794000
        },
        "ok" : 1
}</code>

The average document size is 1136 bytes, and the data takes up about 5GB of storage. The index on _id takes about 130MB. Now we need to verify something very important: is the data duplicated in RAM, existing both within MongoDB and the filesystem? Remember that MongoDB does not buffer any data within its own process; instead, data is cached in the FS cache. Let’s drop the FS cache and see what is in RAM:

<code># echo 3 > /proc/sys/vm/drop_caches 
# free
             total       used       free     shared    buffers     cached
Mem:      30689876    6292780   24397096          0       1044    5817368
-/+ buffers/cache:     474368   30215508
Swap:            0          0          0</code>

As you can see, there is 6.3GB of used RAM, of which 5.8GB shows up as FS cache (the "cached" column). Why is there still 5.8GB of FS cache even after all caches were dropped? The reason is that Linux is smart and does not duplicate pages between tmpfs and its cache… Bingo! That means your data exists as a single copy in RAM. Let’s access all documents and verify that RAM usage is unchanged:

<code>> db.foo.find().itcount()
4000000
# free
             total       used       free     shared    buffers     cached
Mem:      30689876    6327988   24361888          0       1324    5818012
-/+ buffers/cache:     508652   30181224
Swap:            0          0          0
# ls -l /ramdata/
total 5808780
-rw-------. 1 root root  16777216 Apr 30 15:52 local.0
-rw-------. 1 root root  16777216 Apr 30 15:52 local.ns
-rwxr-xr-x. 1 root root         5 Apr 30 15:52 mongod.lock
-rw-------. 1 root root  16777216 Apr 30 16:00 test.0
-rw-------. 1 root root  33554432 Apr 30 16:00 test.1
-rw-------. 1 root root 536608768 Apr 30 16:02 test.10
-rw-------. 1 root root 536608768 Apr 30 16:03 test.11
-rw-------. 1 root root 536608768 Apr 30 16:03 test.12
-rw-------. 1 root root 536608768 Apr 30 16:04 test.13
-rw-------. 1 root root 536608768 Apr 30 16:04 test.14
-rw-------. 1 root root  67108864 Apr 30 16:00 test.2
-rw-------. 1 root root 134217728 Apr 30 16:00 test.3
-rw-------. 1 root root 268435456 Apr 30 16:00 test.4
-rw-------. 1 root root 536608768 Apr 30 16:01 test.5
-rw-------. 1 root root 536608768 Apr 30 16:01 test.6
-rw-------. 1 root root 536608768 Apr 30 16:04 test.7
-rw-------. 1 root root 536608768 Apr 30 16:03 test.8
-rw-------. 1 root root 536608768 Apr 30 16:02 test.9
-rw-------. 1 root root  16777216 Apr 30 15:52 test.ns
drwxr-xr-x. 2 root root        40 Apr 30 16:04 _tmp
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvde1             5905712   4973960    871756  86% /
none                  15344936         0  15344936   0% /dev/shm
tmpfs                 16384000   5808780  10575220  36% /ramdata</code>

And that verifies it! :)

What about replication?

You probably want to use replication since a server loses its RAM data upon reboot! Using a standard replica set you will get automatic failover and more read capacity. If a server is rebooted, MongoDB will automatically rebuild its data by pulling it from another server in the same replica set (a resync). This should be fast enough even with a lot of data and indices, since all operations are RAM only :)
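As a sketch of what that could look like (the hostnames and set name below are illustrative, not from the original setup): add replSet = ramcluster to each server's config file, start the mongod instances, then initiate the set from one member's shell:

<code>> rs.initiate({
    _id: "ramcluster",
    members: [
        { _id: 0, host: "ram1.example.com:27017" },
        { _id: 1, host: "ram2.example.com:27017" },
        { _id: 2, host: "ram3.example.com:27017" }
    ]
})</code>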

It is important to remember that write operations get recorded in a special capped collection called the oplog, which resides in the local database and by default takes 5% of the volume. In my case the oplog would take 5% of 16GB, which is 800MB. When in doubt, it is safer to choose a fixed oplog size using the oplogSize option (a value in MB); if a secondary is down for longer than the window the oplog covers, it will have to be fully resynced. To set it to 1GB, use:

<code>oplogSize = 1000</code>
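Once replication is running, you can sanity-check the configured oplog size and the time window it currently covers with a standard shell helper:

<code>> db.printReplicationInfo()</code>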

What about sharding?

Now that you have all the querying capabilities of MongoDB, what if you want to use it to implement a large service? You can use sharding freely to build a large, scalable in-memory store. Still, the config servers (which store the chunk distribution) should remain disk-based, since their activity is small and rebuilding a cluster from scratch is no fun.
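A minimal sketch of that mixed layout (hostnames are illustrative): keep the config servers on a disk-based dbpath, point mongos at them, then add the tmpfs-backed shards and enable sharding:

<code># mongos --configdb cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019
# mongo
> sh.addShard("ram1.example.com:27017")
> sh.addShard("ram2.example.com:27017")
> sh.enableSharding("test")
> sh.shardCollection("test.foo", { _id: 1 })</code>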

What to watch for

RAM is a scarce resource, and in this case you definitely want the entire data set to fit in RAM. Even though tmpfs can resort to swapping, performance would drop dramatically. To make the best use of the RAM you should consider:

  • using the usePowerOf2Sizes option to normalize the storage buckets (see the sketch after this list)
  • running a compact command or resyncing the node periodically
  • using a schema design that is fairly normalized (avoiding large document growth)
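For instance, the first two points are plain shell commands on the test.foo collection used above (note that compact blocks operations on the database while it runs, so on a replica set you would typically run it on one node at a time):

<code>> db.runCommand({ collMod: "foo", usePowerOf2Sizes: true })
> db.runCommand({ compact: "foo" })</code>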

Conclusion

Sweet, you can now use MongoDB and all its features as an in-memory, RAM-only store! Its performance should be pretty impressive: during testing with a single thread / core I was achieving 20k writes per second, and it should scale linearly with the number of cores.
