Raspberry Pi Ceph Cluster – Testing Part 2

Having built a Ceph cluster with three Raspberry Pi 3 B+s and unsuccessfully tested it with RBD, it’s time to try CephFS and Rados Gateway. The cluster setup post is here and the test setup and RBD results post is here. Given how poorly RBD performed, I’m expecting similar poor performance from CephFS. Using the object storage gateway might work better, but I don’t have high hopes for the cluster staying stable under even small loads in either test.

Test Setup:

I’m using the same test setup as in the RBD tests: two pools, one using 2x replication and the other using erasure coding. The test client is an Arch Linux system running a 5.1.14 kernel with Ceph 13.2.1, a quad-core CPU and 16GiB of RAM, connected to the cluster over 1GbE. I’m also running the OSDs with their cache limited to a maximum of 256MiB, and the metadata cache limited to 128MiB.

CephFS:

CephFS requires a metadata pool, so I created a replicated pool for metadata. Why not create both filesystems (replicated and erasure coded) at once? CephFS doesn’t currently support multiple filesystems on the same cluster, though there is experimental support for it.
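For reference, creating the filesystem looked roughly like this (pool names and placement group counts here are illustrative, not necessarily the exact ones I used):

    # data and metadata pools, both replicated
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 32
    # tie them together into a single filesystem
    ceph fs new cephfs cephfs_metadata cephfs_data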

With the pool created and the CephFS filesystem mounted on my test client, I started the dd write test with a 12GiB file using dd if=/dev/zero of=/mnt/test/test.dd bs=4M count=3072. The first run completed almost instantly, apparently fitting completely in the filesystem cache. CephFS doesn’t support iflag=direct in dd, so I simply reran the write test knowing that the cache was pretty full at this point. Almost instantly, with less than 1GiB written, two of the nodes simultaneously fell over and died. They were completely unreachable over the network, but this time I connected an HDMI monitor to them to see the console. I quickly saw that kernel threads had been blocked for more than 120 seconds, and the system was essentially unusable. A USB keyboard was recognized, but key-presses weren’t registering. I power-cycled the systems after waiting at least five minutes, and they came up fine.

A very unhappy Ceph cluster with two hosts that have locked up.

I tried running the test again, and both hosts quickly locked up. I was running top as they did so, and both hosts rapidly consumed their memory. Despite having the OSD’s cache set to a maximum of 256MiB, the ceph-osd process was using around 750MiB before the system became unresponsive. The OOM killer didn’t kill ceph-osd in this case to save the system, possibly because the kernel failed to allocate the memory it needed internally to do so. CephFS seems to hang the Raspberry Pis hard.

I decided to test a bunch of smaller files, because CephFS is meant more as a general filesystem with a bunch of files, whereas RBD tends to get used to store large VM images. I used sysbench to create 1024 16MiB files for a total of 16GiB on my 40GiB CephFS filesystem. Initially, things seemed to work fine. Sysbench reported that it created the 1024 files at 6.4MB/s. While this was just test preparation, it seemed to be a good sign.
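The preparation and write test were along these lines; I don’t recall the exact write mode, so treat the seqwr below as an assumption:

    # create 1024 files totalling 16GiB on the CephFS mount
    cd /mnt/test
    sysbench fileio --file-num=1024 --file-total-size=16G prepare
    # sequential write test over those files
    sysbench fileio --file-num=1024 --file-total-size=16G --file-test-mode=seqwr run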

What didn’t seem such a good sign was when I actually started running the sysbench write test and Ceph started complaining about slow MDS ops. A lot of them. The sysbench write test immediately failed, citing an IO error on the file. Running ls -la showed a lot of 0-byte files, with a couple of 16MiB files. Ugh. I recreated the test setup, this time with writes at a blazing 540kB/s. When it finally finished several hours later, attempting to run the write tests showed the same truncation of files to 0B as before. This seemed to be a sysbench issue, but I didn’t spend much time troubleshooting it.

For completeness, I also tried an erasure coded pool with CephFS. As with RBD, the metadata can’t live on an erasure coded pool and needs to be on a replicated pool. Results initially looked better, but the OSDs still exhausted their memory and caused host freezes, though after a longer time with more data successfully ingested.
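With the replicated filesystem removed first (since only one filesystem is allowed per cluster), the erasure coded variant was set up roughly like this. The names are again illustrative, and the --force is needed because Ceph discourages an erasure coded pool as the default data pool:

    # partial overwrites must be allowed on the EC pool for CephFS data
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true
    # new filesystem with the EC pool holding the data
    ceph fs new cephfs cephfs_metadata cephfs_ec_data --force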

RadosGW:

I had intended to test RadosGW with the S3 API, but I decided against it. After two different failed tests, the chances of getting any results that didn’t end with the cluster dying seemed pretty low.

The final cluster hardware setup, with three nodes mounted in a nice stack with shorter (and tidier) network cables. The blue-cased Pi isn’t part of the cluster.

Conclusions:

The Raspberry Pi 3 B+ doesn’t have enough RAM to run Ceph. While everything technically works, any attempts at sustained data transfer to or from the cluster fail. It seems like a Raspberry Pi, or other SBC, with 2GB+ of RAM would actually be stable, but still slow. The RAM issue is likely exacerbated by the 900kB/s random write rate the flash keys are capable of, but I don’t have faster flash keys or spare USB hard drives to test with.

Erasure coding seems to do better on RAM-limited systems: while it still failed, it always failed later, with more data successfully written. It may have been more taxing on the Raspberry Pi’s limited CPU resources, but those resources were rarely under contention, with usage averaging around 25% across all 4 cores under maximum load.

The release of the Raspberry Pi 4 B happened while I was writing this series of blog posts. I’d love to re-run these tests on three or four of the 4GB models with actual storage drives. The extra RAM should keep the OSDs from running out of memory and dying or taking the whole system down, and the USB 3.0 interconnect means that storage and network access will be considerably faster. They might be good enough to run a small yet stable cluster, and I look forward to testing on them soon.

Raspberry Pi Ceph Cluster – Testing Part 1

It’s time to run some tests on the Raspberry Pi Ceph cluster I built. I’m not sure if it’ll be stable enough to actually test, but I’d like to find out and try to tune things if needed.

Pool Creation:

I want to test both standard replicated pools and Ceph’s newer erasure coded pools. I configured the replicated pools with 2 replicas. Erasure coded pools are more like RAID: instead of keeping N full replicas spread across the cluster, the data is split into chunks, coding chunks are computed from them, and the data and coding chunks are distributed across the pool. I configured the erasure coded pool to split data into 2 chunks with one additional coding chunk. Practically, this means the two pools should tolerate the same failures, but with 1.5x overhead instead of 2x.
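That layout corresponds to an erasure code profile with k=2 data chunks and m=1 coding chunk. Roughly, with illustrative names and placement group counts:

    # profile: split data into 2 chunks plus 1 coding chunk
    ceph osd erasure-code-profile set ec-21-profile k=2 m=1
    # replicated pool with 2 copies
    ceph osd pool create rbd_replicated 64 64 replicated
    ceph osd pool set rbd_replicated size 2
    # erasure coded pool using the profile above
    ceph osd pool create rbd_ec_data 64 64 erasure ec-21-profile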

I created one 16GiB pool of each type to test. Why not always use erasure coded pools? They’re more computationally complex, which might be bad on compute-constrained devices such as the Raspberry Pi. They also don’t support the full set of operations. For example, RBD can’t completely reside on an erasure coded pool. There is a workaround: the metadata resides on a replicated pool, with the data on the erasure coded pool.

Baseline Tests:

To get a basic idea for what performance level I could expect from the hardware underlying Ceph, I ran network and storage performance tests.

I tested network throughput with iperf between each Raspberry Pi 3 B+ and an external computer connected over Gigabit Ethernet. Each got around 250Mb/s, which is reasonable for a gigabit chip connected via USB2.0. For comparison, a Raspberry Pi 3 B (not the plus version) with Fast Ethernet tested around 95Mb/s. As a control, I also tested the same iperf client against another computer connected over full gigabit at 950Mb/s.

Disk throughput was tested using dd for sequential reads and writes and sysbench’s fileio module for random reads and writes, against a flash key with an XFS filesystem. XFS used to be the recommended filesystem for Ceph until Ceph released BlueStore, and it’s still used for BlueStore’s metadata storage partition. The 32GB flash keys performed at 32.7MB/s sequential read and 16.5MB/s sequential write. Random read and write with 16KiB operations yielded 15MB/s and 0.9MB/s (that’s 900kB/s) respectively.
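The baseline numbers came from tests along these lines (commands are illustrative; paths, sizes and the iperf server address are stand-ins):

    # network throughput to an external machine running "iperf -s"
    iperf -c 192.168.1.10 -t 30
    # sequential write and read of a large file on the XFS-formatted flash key
    dd if=/dev/zero of=/mnt/flash/seq.dd bs=4M count=256 oflag=direct
    dd if=/mnt/flash/seq.dd of=/dev/null bs=4M iflag=direct
    # random 16KiB reads and writes via sysbench's fileio module
    sysbench fileio --file-total-size=4G --file-block-size=16384 prepare
    sysbench fileio --file-total-size=4G --file-block-size=16384 --file-test-mode=rndrd run
    sysbench fileio --file-total-size=4G --file-block-size=16384 --file-test-mode=rndwr run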

RBD Tests:

RADOS Block Device (RBD) is a block storage service: you run your own filesystem on top, but Ceph’s replication protects the data and access is spread over multiple drives for increased performance. Ceph currently doesn’t support using pure erasure coded pools for RBD (or CephFS); instead, the data is stored in the erasure coded pool and the metadata in a replicated pool. Partial writes also need to be enabled per-pool, as per the docs.
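Reusing the pool names from above, the per-pool setup and image creation looked roughly like this (the image name and mount point are mine for illustration):

    # partial (overwrite) writes must be enabled on the EC pool
    ceph osd pool set rbd_ec_data allow_ec_overwrites true
    # image metadata lives in the replicated pool, its data goes to the EC pool
    rbd create --size 16384 --data-pool rbd_ec_data rbd_replicated/test-image   # 16GiB, size in MiB
    # map the image on the client, format it and mount it
    rbd map rbd_replicated/test-image
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/test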

Once the pools were created, I mounted each and started running tests. The first thing I wanted to test was just sequential read and write of data. To do this, I made an XFS filesystem with the defaults and mounted it on a test client (Arch Linux, kernel 5.1.14, quad core, 16GiB RAM, 1xGbE, Ceph 13.2.1) and wrote a file using dd if=/dev/zero of=/mnt/test/test.dd bs=4M count=3072 iflag=direct. Initially, writes to the replicated rbd image looked decent, averaging a whopping 6.4MB/s. And then the VM suddenly got really cranky.

The Ceph manager dashboard’s health status. Not what I’d been hoping for during a performance test.

One host, rpi-node2, had seemingly dropped from the network. Ceph went into recovery mode to keep my precious zeroes intact, and IO basically ground to a halt as the cluster recovered at a blazing 1.3MiB/s. I couldn’t reach the node over SSH, so I power-cycled it. It came back up, Ceph realized that the OSD was back, and the re-balancing of the cluster was cut short. To re-run the write test fresh, I deleted the test file and ran fstrim -v /mnt/test, which tells Ceph that the blocks can be freed, so the space is released on the OSDs.

The second test ended similarly to the first, with dd writing happily at 6.1MB/s until rpi-node3 stopped responding (including to SSH) at 3.9GB written. This time I stopped dd immediately and waited for the node to come back, which it did after almost two minutes. I checked the logs and saw that the system was running out of memory and the ceph-osd process was getting OOM killed. I also noticed that in both failures, the node that died was the one running the active ceph-mgr instance serving the dashboard.

I ran the test again, this time generating a 1GiB file instead, and confirmed that it was the node with the active ceph-mgr instance that was running out of memory. I also let the write finish, testing how well Ceph ran in the degraded state. At 2.8MB/s, no performance records were being set, but the cluster was still ingesting data with one node missing.

I had two options. The first was to move the ceph-mgr daemon to another device, but I wanted the cluster to be self-contained to three nodes, so I opted for the second: lowering the memory usage of the ceph-osd daemon. I looked at the documentation for the BlueStore config and saw that the default cache size is 1GiB, or as much RAM as the Pi has. That just won’t do, so I added bluestore_cache_size = 536870912 and bluestore_cache_kv_max = 268435456 to my ceph.conf file and restarted the OSDs. This means that BlueStore will use at most 512MiB of RAM for its caches, with at most 256MiB of that for the RocksDB metadata cache.
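In ceph.conf on each node, that looks something like this:

    [osd]
    # 512MiB total BlueStore cache, with at most 256MiB of it for RocksDB metadata
    bluestore_cache_size = 536870912
    bluestore_cache_kv_max = 268435456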

I reran the 1GiB file test and got 3.5MB/s write speed with no OSDs getting killed. With the 16GiB file, writes averaged 3MB/s, but around the halfway mark RAM usage eventually got the OSD running on the same node as the active ceph-mgr killed. Again, the cluster survived, just in a less happy state until the killed OSD daemon restarted. I disabled the dashboard, and while this helped RAM usage, the ceph-osd daemon was still getting killed. I further dropped the BlueStore cache to 256MiB, with 128MiB for the metadata store. This time I locked two of the three nodes up hard and needed to power-cycle them.

During the 1GiB file tests, this was the unhappiest the cluster got. The PGs were simply slow to replicate and caught up quickly once the write load subsided.

With the replicated testing being close to a complete failure, I moved on to testing the erasure coded pool. I expected it to be worse due to the increased amount of compute needed. I was wrong. I was able to successfully write the test file, and the worst I saw was an OSD being slow, not responding to the monitor fast enough and then recovering a few seconds later. Sequential writes averaged 5.7MB/s and sequential reads averaged 6.1MB/s, but I still had two nodes go down at different times. It seems that erasure coded pools perform slightly better, but can still cause system memory exhaustion.

One thing to note is that even with three nodes, Ceph never lost any data and was still accepting writes, just very slowly. I hadn’t intended to test Ceph’s resiliency, as that has already been well tested, but it was nice to see that it kept serving reads and writes.

At this point, I don’t think that the 1GiB of RAM is enough to run RBD properly. Sequential writes looked to be around 6MB/s when the cluster hadn’t lost any OSDs or whole hosts. I never attempted to test random access, due to the issues with sequential reads and writes.

CephFS and Rados Gateway:

With RBD being a bit of a bust, I wanted to see if CephFS and Rados Gateway performed better. As this post is getting long, CephFS and RadosGW results are in a second post, along with a conclusion.


Raspberry Pi Ceph Cluster – Setup

I’ve used Ceph several times in projects where I needed object storage or high resiliency. Other than using VMs, it’s never something I could easily run at home to test, especially not on bare metal. Circa 2013, I tried to get it running on a handful of Raspberry Pi Bs that I had, but they had far too little RAM to compile Ceph on directly, and I couldn’t get Ceph compiling reliably with a cross-compiler toolchain. So I moved on with life. Recently, I’ve been looking at Ceph as a more resilient (and expensive) replacement for my huge file server, and I found that it’s now packaged in Arch Linux ARM’s repos. Perfect!

With that hurdle out of the way, I decided to get a 3 node Ceph cluster running on some Raspberry Pis.

The Hardware:

All the gear needed to make a teeny Ceph cluster.

Hardware used in the cluster:

  • 3x Raspberry Pi 3 B+
  • 3x 32GB Sandisk Ultra microSD card (for operating system)
  • 3x 32GB Kingston DataTraveler USB key (for OSD)
  • 3x 2.5A microUSB power supplies
  • 3x Network cable
  • 1x Netgear GS108E Gigabit Switch and power supply

My plan is to have each node run all of the services needed for the cluster. This isn’t a recommended configuration, but I only have three Raspberry Pis to dedicate to this project. Additionally, I’m using a fourth Raspberry Pi as an automation/admin device, but it isn’t directly participating in the cluster.

Putting it all Together:

  • Create a master image to save to the three SD cards. I grabbed the latest image from the Arch Linux ARM project and followed their installation directions to get the first card set up.
  • Note: I used the Raspberry Pi 2 build, because the Ceph package for aarch64 on Arch Linux ARM is broken. I’m also using 13.2.1, because Arch’s version of Ceph is uncharacteristically almost a year old and I wasn’t going to try to compile 14.2.1 myself again.
  • Once the basic operating system image was installed, I put the card into the first Raspberry Pi and verified that SSH and networking came up as expected. It did, getting a static IP address from my already configured DHCP server.
  • I first tried to use Chef to configure the nodes once I got them running. Luckily Arch has a PKGBUILD for chef-client in the AUR. Unluckily, it’s not for aarch64. Luckily again, Chef is supported on aarch64 under Red Hat. I grabbed the PKGBUILD for x86_64, modified it to work with aarch64, built and installed the package.
  • I created a chef user, gave it sudo access, generated an SSH key on my chef-server, and copied it to the node.
  • At this point, I had done as much setup on the single node I had, so I copied the image onto the other two microSD cards, and put them into the other Pis.
  • Chef expects to be running on a system that isn’t Arch Linux. After some time fighting with it, I decided I’d spent enough time trying to get it working.
  • With Chef a bust, I moved on to Ansible and re-imaged the cards to start fresh.
  • ceph-ansible initially worked better, due to Ansible being supported on Arch Linux, but the playbook doesn’t actually support Arch. I needed to make some modifications to get the playbook to run.
  • With some basic configuration of the playbook ready to go, I got the mons up and running pretty easily. But OSD creation failed on permissions issues. Something in the playbook was expecting a different configuration than Arch Linux uses. Adding the ceph user to the “disk” and “storage” groups partially fixed the permissions issues, but OSD creation was still failing. Ugh.
Ceph cluster in operation. The rightmost blue Raspberry Pi is being used as an admin/automation server and isn’t part of the actual Ceph cluster.

Time for Manual Installation:

While part of my goal was to try some of the Ceph automation, Chef’s and ceph-ansible’s lack of support for Arch Linux meant that I wasn’t accomplishing my main goal, which was to get a small Raspberry Pi cluster up and running.

So I re-imaged the cards, used an Ansible bootstrap playbook that I wrote and referred to Ceph’s great manual deployment documentation. Why manually deploy it when ceph-deploy exists? Because ceph-deploy doesn’t support Arch Linux.

MONs:

MONs, or monitors, store the cluster map, which is used to determine where to store data to meet the reliability requirements. They also do general health monitoring of the cluster.

After following all of the steps in the Monitor Bootstrapping section, I had the first part of a working cluster, three monitors. Yay!

One difference from the official docs: in order to get the mons starting on boot, I needed to run systemctl enable ceph-mon.target in addition to enabling the ceph-mon daemons themselves, otherwise systemctl listed their status as Active: inactive (dead) and they didn’t start.
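Concretely, on each node that meant enabling both the per-daemon unit and the target, something like:

    systemctl enable ceph-mon@rpi-node1   # and likewise rpi-node2 / rpi-node3 on the other nodes
    systemctl enable ceph-mon.target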

MGRs:

The next step was to get ceph-mgr running on the nodes with mons on them. The managers provide interfaces for external monitoring, as well as a web dashboard. While Ceph has documentation for this, I found Red Hat’s documentation more straightforward.

In order to enable the dashboard plugin, two things needed to be done on each node:

  • First, run ceph dashboard create-self-signed-cert to generate the self-signed SSL certificate used to secure the connection.
  • Then run ceph dashboard set-login-credentials username password, with the username and password credentials to create for the dashboard.
  • Running ceph mgr services then returned {"dashboard": "https://rpi-node1:8443/"}, confirming that things had worked correctly, and I could get the dashboard in my browser.
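One step not spelled out above is enabling the dashboard module itself; if it isn’t already on (I don’t remember whether my packages enabled it by default), it’s a single command:

    ceph mgr module enable dashboard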

OSDs:

Now for the part of the cluster that will actually store my data: the OSDs, which use the 32GB flash keys. If I wanted to add more flash keys, or maybe even hard drives, I could easily add more OSDs.

I followed the docs, adding one BlueStore OSD per host on the flash key. One note: as I’d already tried to set the keys up using ceph-ansible, they still had GPT partition tables. I ran ceph-volume lvm zap /dev/sda on each host to fix this.
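After the zap, each OSD was a single ceph-volume call, roughly:

    # wipe the leftover GPT partition table from the earlier ceph-ansible attempt
    ceph-volume lvm zap /dev/sda
    # create a BlueStore OSD on the flash key (this also sets up the systemd units)
    ceph-volume lvm create --bluestore --data /dev/sda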

Additionally, I didn’t realize that the single ceph-volume command also sets up the systemd service, and I created my own. That left me with an OSD daemon without any storage in the cluster map. I followed Ceph’s documentation and removed the OSD, but now my OSD IDs start at 1 instead of 0.

MDSs:

I plan on testing with CephFS, so the final step is to add the MetaData Server, or MDS, which stores metadata related to the filesystem. The ceph-mds daemon was enabled with systemctl enable ceph-mds@rpi-nodeN (N being the number of that node) and then systemctl enable ceph-mds.target so that the MDS is actually started.

Now What?

A healthy Raspberry Pi Ceph cluster, ready to go.

The cluster can’t store anything yet, because there aren’t any pools, radosgw hasn’t been set up and there aren’t any CephFS filesystems created.

If I had more Pis, I’d love to test things with more complex CRUSH maps.

The next blog post will deal with testing of the cluster, which will likely include performance tuning. Raspberry Pis are very slow and resource constrained compared to the Xeon servers I’ve previously run Ceph on, so I expect things to go poorly with the default settings.

Results:

I split the results into two parts: the first includes the test setup and RBD tests, and the second includes the CephFS and radosgw tests as well as a conclusion.

Reverse Engineering Using Memory Inspection and Numerical Analysis

One of the more recent tools I’ve added to my repertoire for reverse engineering the game simulation is inspecting the memory of a running game to see how things vary, without needing to pause the game and count individual tiles many times over.

Power Plants and Cheat Engine:

To map the MW rating of power plants to tiles powered, I could have built a city, connected a power plant to it and counted the number of powered tiles, but the game has a variable that stores the total power produced. I could just build a city and look at this value, swapping power plants out as needed.

My go-to tool for finding, viewing (and sometimes editing) values in memory is Cheat Engine. I use it to search for a known value, pulled from the save file, which gives me the memory location holding that value. SC2k always uses the same memory addresses each time it starts up, so I slowly built up a table of where values end up in memory, at least for the common stuff.

Why didn’t I just build the plant, pause, save, load it into my parser and look at the value? Memory inspection is way faster after the initial discovery, and allowed me to more rapidly test other things.

Cheat Engine address table showing some of the interesting addresses. The total amount of power produced is highlighted.

Once I found the total power variable, it was pretty quick to calculate that the MW rating for power plants doesn’t actually mean anything and each power plant has an internal statistic for how many tiles it powers, including the tiles that make up the plant itself!
For more info, see the spec here.

Weather – Beyond Cheat Engine:

But what happens when I want to look at the correlation between various values over many game cycles? Does weather, specifically temperature, affect the crime rate? If so, how much? Weather definitely affects how much electricity solar and wind power plants produce as well as how much water is pumped, but what’s the formula there?

Basically, how do I get those values from the running game and make decisions on it?

I’m still working on answering that question, but at its most basic level, I want to read various memory values once per tick of the game. The tick count is also used to determine the date in the game, which starts at a certain epoch.

So I wrote a simple Python program that dumps values at memory locations once per game tick, and another that uses pandas, numpy and matplotlib to do some analysis.

For power plants, I didn’t need anything fancy. I could see by quick inspection of values that wind power is indeed strongly affected by altitude and to a lesser extent how windy it is. Solar power is also affected by how humid it is, but both also have a random component involved. The simulation specification documents show how the weather affects wind and solar power in greater detail.

But what does this “look” like? I logged several hundred thousand values from a city containing a single power plant and a single water pump, and created a correlation graph using numpy, pandas and matplotlib.

Correlation between (solar) power and water production.

What’s this graph telling us? Well, solar power production is negatively correlated with how humid (-0.298) and windy (-0.245) it is. Not knowing exactly what these values represent in the game, it seems that higher values are probably rainier. I didn’t bin the weather the same way, as the game treats it as discrete values rather than the “continuous” values used for humidity and windiness. So solar power is definitely affected by weather. On the other hand, water production is positively correlated with the weather (0.486 with humidity and 0.242 with windiness), which suggests that in rainy weather the pumps produce more water. This is about what the game does.

But what about crime? Below is a correlation graph generated the same way.

Correlation between crime and various game weather variables. The diagonal is how much each variable correlates with itself, which is always 1.0, or completely correlated, as expected.

So what sort of correlation do we see between crime and how hot it is? Almost none, but there is a slight correlation, 0.0287, so it does affect the simulation a little bit. Unsurprisingly, the type of weather is well correlated with wind (0.525), humidity (0.877) and heat (0.287). This isn’t enough to know how the formula works, because there is randomness at work, but it’s a start to figuring it out.

I’m not sure if this sort of analysis is actually going to be useful in reverse engineering the game, but it’s definitely a useful place to get started, and may yet be a useful tool to determine exactly what is happening inside the simulation.

 

Thomnar – Multiple Tile City

One of my overarching goals for a re-implementation of SimCity 2000 is making city sizes much larger than the 128×128 tile limit. This is based on one city I made where I built a 25 tile city in a 5×5 grid by manually reconciling the edges. Here’s a little more about that city. The city’s name, Thomnar, doesn’t mean anything; it just sounded good.

The final result represents somewhere around 100 hours of work spread over 8 years while I was in high-school and University. Most of this was spent reconciling the edges of the maps, building out the actual cities and then a little bit in Photoshop stitching the screenshots together.

When building it, I tried to keep in mind how many North American cities look: a dense core giving way to poorly planned sprawl, lots of horrible land use around roads, and true wilderness on the outskirts. There are lots of power plants and other services to support the city, but since I have no way of moving power or water between tiles, each tile needed its own power and water utilities even if adjacent plants would have been enough to power it.

Thomnar, in its current full glory. Current population: 833,010 sims.

The first tile is actually based on the Charleston scenario city. When I finished building the city, it looked like half the city was missing, so I added another tile on top to balance things out and make it look better.

The second, continuation tile is at the top. Population: 250,760 sims.

Next I added some more tiles on the edge, which also look like the view from the neighbours’ window.

The neighbours’ window in game.

 

What I imagine the city’s neighbours would look like. Population: 395,900 sims.

Next I filled in the corner tiles to get a 3×3 grid, and then decided that the city looked weird without surrounding towns and wilderness, which resulted in this 4×5 grid.

4×5 grid of Thomnar. Population: 748,920.

From the 4×5 grid, I added another set of tiles to get the full square 5×5 grid. I created another couple of tiles that aren’t included yet, but by then, reconciling edges was starting to feel like a chore, so I stopped work on the city.

Gallery of some of the neat details I added to the city (if anything looks quirky, it’s because the images are rendered outside of the game using in-game assets):

Things SimCity 2000 Never Intended You to Do

This post is meant as a showcase of some of the proof-of-concept things I’ve done with SimCity 2000 that the game never intended you to do.

The way the game stores an in-progress disaster is by setting the city to disaster mode and setting the disaster type in the MISC segment of the .sc2 file, then adding the actual disaster tiles to the XTXT segment/layer. But what happens if we place disaster tiles without going into disaster mode? Well, we get a city floating in the clouds, and hell: a city that’s eternally on fire but never consumed.

And once you can change anything at will, there are other interesting things that can be done. Make mixed bridges? Easy. Edit the terrain after the city has been built? Yep, that’s a single value changed. Make tunnels cross? Well, sort of. The game doesn’t seem to know what to do with them. Avenues? Yep, just remove the intersections.

There’s a lot more possible, but I’ve been focused on getting more of the game reverse engineered over building better editing tools. I have considered building a standalone editor that would also function as a game frontend, but haven’t had time to do much with it.

I’ve also used the editing ability to make cities that have features some cities in real-life have. Rivers in many places flood, and often have levees along them, so I made a city that used the completely raised terrain tile as levees along a river instead of making a 3-tile wide strip of raised terrain.

Levees protecting a city from flooding. Normally the game wouldn’t allow you to do this, but it adds some realism, especially with the non-ramp access to the bridge.

SimCity 2000


SimCity 2000 is the second of Maxis’ well received city builders and is my favourite in the series, in addition to being one of my favourite games of all time. I’m not sure exactly why, but it hits the best blend of building, management and art.

I found that SimCity 4 was all about micromanagement at a scale that wasn’t fun; the game was too hard, at least without mods to fix some of the more serious issues. SimCity 2000’s art style is very well done 8-bit graphics, which I knew weren’t supposed to be hyper-realistic and never looked janky, unlike some of the later games. Maybe there’s also some nostalgia too.

Despite the game being released around a quarter-century ago as of this writing, I still play it occasionally. I play the Windows 95 version, which is the definitive version. Unfortunately, it doesn’t run on modern versions of Windows without a patch to the executable, so the inferior DOS version is the only one commercially available at present.

Even when I was a child, playing the game, I’d often get to the (admittedly large for the time) limits of the 128×128 tile map and wish my city could somehow continue past it, but actually doing so was beyond my abilities then.

I’d built one city out of 25 individual tiles over several years during high-school and early University, painstakingly reconciling the edges as I went. It was slow going, and I eventually got tired of it as it was getting to be more and more work as it grew.

A city I called Thomnar, which doesn’t mean anything, that I built using a 5×5 grid of city tiles.

And then in 2014, I was finishing up my last semester of University and working, and I felt the draw to the game again. I still really liked that large, multi-tile city and wanted to work on it more, but reconciling the edges was annoying. Except this time, I had the tools and technical ability to start reverse engineering the game and working on a re-implementation.

My initial goal was simple: figure out the city binary file format and write something to automatically reconcile the edges.

As I figured out more and more of the format, my focus shifted from writing a tool to reconcile edges to a complete re-implementation of the game. This also involved figuring out the complete sprite file spec, as well as working on the text data files.

Having the (nearly) complete city file spec also means that I can make cities that are impossible to make in the game, such as this city that is floating in the clouds.

I have figured out most of the .sc2 file spec, all of the game sprite file spec, the .mif sprite spec, and some of the text file specs, all of which are kept up-to-date on my GitHub.

I also have written a Python library and related tools to open, edit and save out the modified .sc2 file, but it isn’t ready for public release yet. When it is, I’ll update this post.

SimCity 2000 Posts:

Things SimCity 2000 Never Intended You to Do
Thomnar – Multiple Tile City
Reverse Engineering Using Memory Inspection and Numerical Analysis

Useful SimCity 2000 Links:

Run the Windows 95 version on modern Windows
OpenSC2K – Open Source re-implementation
Pat Coston’s ClubOpolis
My documentation on the game