Over the next 6-12 months, Maplewave will transition away from our traditional datacenter model and move to the public cloud. Throughout this process, I'll be writing regular blog posts detailing our journey and the lessons we learn along the way.
In this blog, I’ll outline a little bit of our history and how we have gotten to where we are today. As I reflect on the last 30 years, I hope to give you insight into how our industry has changed, and how we have evolved accordingly. Hopefully it will help with similar decisions you may be facing.
We are a software company that has always had ties to the underlying hardware that runs the applications we create. Born in the early days of client-server technology, we even used to sell server and desktop systems to our customers so that they could use our software.
As we were born long before the Internet boom of the late '90s, real-time hosting of the application and data was simply not an option. With the advent of DSL and fibre business Internet connectivity over the last 20 years, we were able to establish ourselves as an ASP (application service provider) and host our customers in our own data center.
When we began hosting our applications, we were very server-heavy; although our application is designed to be multi-tenant, we tended to build out our environments (for our bigger clients, anyway) as one environment per server, and some customers had more than one.
The first data center I designed and built in the mid-2000s had 8 racks of gear and room to scale to almost double that; not super large by today's standards, but not bad for a medium-sized private business. All storage was internal to the servers (putting disk next to the CPU has to be fast, right?), and we had a LOT of disks, so the servers tended to be 2-3U and spun our power meters like pinwheels.
Eventually, we convinced ourselves that virtualization made sense, as did shared disk technology, and we began to greatly consolidate our infrastructure. At the same time, we also started to grow our customer base and expand our requirements, especially in terms of storage.
Today, we run multiple virtualized data centers (with fewer overall hosts), but with multiple SAN arrays, 10GbE networking and advanced security devices.
A few years back, we had an assessment completed by a consulting firm that essentially asked us, "Why are you still using your own infrastructure? Why are you not at least looking at IaaS (Infrastructure as a Service)?"
As I had just recently rejoined the company coming off an engagement as a cloud architect, I was thinking the same thing. Companies of our size do not need to own servers and storage anymore if it’s not crucial to the business.
It's taken us a while to work through everything that goes into that idea, not the least of which is moving from a capex to an opex model, but we are finally there. The costing is complex, but once you factor in the cost of adding storage and doing server/storage refreshes every 3-4 years, the picture becomes much clearer.
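To make that capex-vs-opex comparison concrete, here is a minimal back-of-envelope sketch. All the figures and the function names are hypothetical placeholders, not Maplewave's actual numbers; the point is simply that periodic hardware refreshes have to be amortized into the on-prem total before the two models can be compared fairly.

```python
# Back-of-envelope capex-vs-opex comparison.
# All dollar figures below are made-up placeholders; plug in your own quotes.

def on_prem_tco(hardware_cost, refresh_years, annual_opex, years):
    """Total cost of ownership for owned hardware: the fleet is repurchased
    every `refresh_years`, plus recurring costs (power, cooling, support
    contracts, admin time) every year."""
    refreshes = -(-years // refresh_years)  # ceiling division: number of buys
    return hardware_cost * refreshes + annual_opex * years

def cloud_tco(monthly_spend, years):
    """Cloud is pure opex: approximated here as a flat monthly bill."""
    return monthly_spend * 12 * years

# Hypothetical figures for a mid-sized virtualized environment over 8 years
onprem = on_prem_tco(hardware_cost=250_000, refresh_years=4,
                     annual_opex=60_000, years=8)
cloud = cloud_tco(monthly_spend=9_500, years=8)

print(f"8-year on-prem TCO: ${onprem:,}")  # two hardware buys + 8 years opex
print(f"8-year cloud TCO:   ${cloud:,}")
```

Even a toy model like this shows why the comparison only "becomes clearer" over a multi-refresh horizon: on a 2-3 year window the already-purchased hardware looks free, while over 8 years the second refresh lands squarely in the on-prem column.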
There are also headaches (and lost sleep) involved in dealing with hardware devices, including hardware refreshes, firmware and OS updates, let alone outright failures. We are a software development company; we know software really, really well, so let's focus our energies there.
We've been dabbling in public cloud environments for a while, but now it's time to move forward and embrace the cloud as our future.
In my next blog post, I’ll discuss how we are beginning our journey with a few simple but critical steps to define the guiding principles of the cloud architecture design.