Getting the parts


As mentioned, I’m building a new Ceph storage node out of an 8-disk NAS with an Intel Avoton C2750 Octa-Core and 32GB of RAM. Of the needed parts I’ve got the U-NAS 800, two of the four SSDs (Samsung EVO 840s, 120GB, for the OS), and the ASRock C2750D4I. Within the next few weeks I expect to purchase the remaining memory and the eight drives I’ll need to run the ceph-osd processes: two SSD journals (again Samsung EVO 840s, the 250GB versions) and six 4TB disks.

I’m also sourcing parts to upgrade the home network to 10GbE, which is exciting.

2 years ago

I’m going to build a Ceph NAS on top of an Intel Avoton C2750 Octa-Core. I’m excited to see how this thing will hold up, and I’m tempted to add a dual-port 10GbE card. The base will be 32GB of ECC RAM, two 120GB SSDs for the OS, two 250GB SSDs (520MB/s) for journals, and six drives for OSDs. I’ll be surprised if the machine is capable of handling the full 10GbE while using erasure coding; I may have to give up the space savings and stick with replication, but I’ll be happy if I can get anywhere near 1GB/s.

If this machine works and benchmarks well, it may become the primary platform for my home setup: very low power and dense enough to give me up to 24TB of raw storage. If I don’t go with 10GbE, I can drop the two SSD journals and add an additional 8TB of storage, bringing it to 32TB raw.

My Home Ceph Cluster


The Hardware

Three server nodes on a DCS6005, a version of Dell’s older C6105 model.

Here are the basics for the three nodes:

Dual six-core AMD Opteron 2419 EE processors
48GB of DDR2 333MHz RAM
Dual 1Gbit ports
Three spinning disks: one for the OS and two for OSD storage

The Ceph Cluster

I am using Ubuntu 12.04.4 (Precise) on all nodes while running the stable version of Ceph Dumpling (v0.67.7).

ceph -s | grep map
monmap e7: 3 mons at {belle=,merida=,snow=}, election epoch 294, quorum 0,1,2 belle,merida,snow
osdmap e52728: 6 osds: 6 up, 6 in
pgmap v5390418: 11364 pgs: 11363 active+clean, 1 active+clean+scrubbing+deep; 4357 GB data, 8726 GB used, 8026 GB / 16752 GB avail
mdsmap e2460: 1/1/1 up {0=snow=up:active}, 2 up:standby

Disks have been purchased over time and consist of slow, low-cost, high-density drives. In this case each node has one 2TB and one 4TB drive.

The Goal

The original reason for this cluster was to have a single central storage location for all large and some small media in our home. To achieve this I primarily use CephFS, running Samba on top to expose the storage to all the Windows desktops on the network. To distribute images to friends and family I run the RadosGW. I don’t have much use for RBD at the moment, but I’m planning on using it for VM image and volume storage.
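The post doesn’t include the actual configuration, but a minimal Samba export over a CephFS mount might look roughly like this (the mount point, share name, and user are illustrative assumptions, not the real setup):

```ini
# Hypothetical /etc/samba/smb.conf fragment. Assumes CephFS is already
# mounted at /mnt/cephfs, e.g. via the kernel client:
#   mount -t ceph <mon-host>:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/secret
[media]
    path = /mnt/cephfs/media
    browseable = yes
    read only = no
    valid users = jj
```

Windows clients then see the share as a normal network drive while the data is actually striped across the cluster underneath.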

While I have had this cluster for a while, and the hardware has changed over time, I have never really taken the time to tune the environment and figure out the full performance I can get from it. In my next few posts I will describe the changes I make as I work to improve the performance of my cluster and its clients.

SSL Errors in the Memory Manager


I’ve been contacted by several users of the memory manager application regarding the following error showing up in logs:

error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

This is due to the SSL certificate for the API having expired, something I’m sure DH will be fixing soon. In the meantime I used the opportunity to push out the latest version of the memory manager application with a workaround that ignores the expired cert. A couple of bug fixes (IE users should see the graphs properly now) and some code cleanup also went into this version.

The download can be found to the right, or on the project’s GitHub page.


4 years ago

Mostly for my personal notes, but this is useful to anyone who works with raw Apache logs.

awk '{A[$1] += $10} END {for (i in A) print i ": " A[i]/1024/1024 " MB"}' access.log | sort -k2 -n

A quick little one-liner to figure out which IP is sucking away your bandwidth. It sums the response size (field 10 in the combined log format) per client IP (field 1), then sorts by the totals.
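A variant of the same idea, sorted largest-first and limited to the top offenders (field positions assume the standard combined log format; adjust if your LogFormat differs):

```shell
# Sum bytes served ($10) per client IP ($1), print totals in MB,
# then show the ten heaviest consumers first.
awk '{bytes[$1] += $10} END {for (ip in bytes) printf "%s %.2f MB\n", ip, bytes[ip]/1024/1024}' access.log \
  | sort -k2 -rn | head -n 10
```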

PHP-FPM ondemand!


PHP 5.3.9 came out on Jan 10th and reading over the changelog I geeked out.

Specifically because of this line item:

Implemented FR #52569 (Add the “ondemand” process-manager to allow zero children).

Go check out that bug and the related patch. It’s going to be amazing to see shared hosting services take advantage of PHP-FPM with the ondemand process manager option. In the past, when you started PHP with FPM, you had two options, dynamic or static, both of which created a pool of PHP processes at startup. Using ondemand allows PHP child processes to be started only when there is a request that needs to be handled.
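As a rough sketch, an ondemand pool definition looks something like this (the socket path and limits are illustrative defaults, not the configuration from this server):

```ini
; Hypothetical php-fpm pool using the ondemand process manager.
; No children are spawned at startup; they appear as requests arrive.
[www]
listen = /var/run/php-fpm.sock
pm = ondemand
pm.max_children = 10            ; upper bound on concurrent children
pm.process_idle_timeout = 10s   ; idle children exit after this long
pm.max_requests = 500           ; recycle a child after this many requests
```

The `pm.process_idle_timeout` setting is what lets the pool shrink back to zero between bursts of traffic, which is where the memory savings come from.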

I just switched my server over to a custom PHP install using this feature. Here was my memory usage before enabling it:

free -m
             total       used       free     shared    buffers     cached
Mem:          1358       1242        115          0          0        342
-/+ buffers/cache:        900        457
Swap:            0          0          0

Immediately after implementation this is my memory usage:

free -m
             total       used       free     shared    buffers     cached
Mem:          1708        518       1189          0          0        491
-/+ buffers/cache:         26       1681
Swap:            0          0          0

I should note that my memory manager increased the available memory on my machine while it was compiling PHP.


Memory Usage on DreamHost Apache configuration Vs. Nginx Configuration


Please note that all I’m trying to point out here is the difference in stable memory usage by these two configurations. It should also be noted that the main cause of these differences is (for most people) going to be how PHP is spawned and handled by the two configurations.

Apache on the DreamHost configuration will (by default) use FastCGI for PHP, which spawns processes as needed (and ends them over time); this can lead to large spikes in memory usage. Predicting memory usage in this setup is difficult, to say the least.

Memory Usage of DH's Apache Configuration

Nginx on the DreamHost configuration is a bit easier on the eyes, as you can see from the graph below. It should be noted that I run several more sites on my Nginx configuration than I do on the Apache configuration, and because of this I run a much larger pool of PHP processes, which increases the overall memory usage. If both servers were running the same set of sites, with the only difference in configuration and applications being the selection of Nginx instead of Apache, memory usage would be lower and still predictable.

Memory Usage of DH's Nginx Configuration

JJ’s VPS Memory Manager v1.1 – Released


Check out the download links on the side bar to the right. You’ll now see that v1.1 of my memory manager has been released. Since I’ve got to start my shift soon I’m just going to give a quick summary of the new features, starting with the most important:

1) Process list dumped to log on resize (date, used memory, suggested memory, and cache memory are all listed as well)
2) Ability to send email on resize request
3) Ability to tweet (uses oauth) on resize request

Check out the readme file for information on installation, upgrades, and configuration.

As always please don’t hesitate to contact me if you need help or have any questions!

Memory Manager v1.1 is Coming Soon


First off, thanks to everyone who has been actively testing and using this application and contacting me with feature requests and questions. The latest version of the memory manager (v1.1) will be coming out soon, and with it will come what I consider to be an essential feature: with every resize there will be a dump of all running processes, sorted by memory usage.

What does that mean to you?

Chances are you’re using this memory manager to help figure out where your memory is going and to keep your services up and running. Having a snapshot of your processes, taken when the script sees the need for a resize, will tell you where your memory is actually being used. Very soon you’ll be able to review your logs and click on the “process log” link. I’ll post an update to the news you see within the memory manager when the newest version is ready.

JJ’s VPS Memory Manager v1.0 Release


I am very pleased to announce that version 1.0 of JJ’s VPS Memory Manager has been released today. This version is considered stable, and all reported bugs have been corrected. Go ahead and download version 1.0 (zip or tar.gz are available) and give it a try. If you run into any problems or have questions, shoot me an email.


