Best practices for building on-prem appliances

It’s safe to say we’ve done our fair share of work helping companies package their apps to run on-premises (on-premise?).

For this reason, we’ve decided to list some of our best practices. We hope to cover enough ground to help anyone out there, whether you’re switching from a SaaS, moving away from an installer, or starting from scratch.

First steps: a business decision

There are a few important business implications to building for on-premises. We like the idea of scaling a business, but serving enterprise customers will eventually require growing your support staff.

Pricing models will vary greatly between a SaaS and an on-premises appliance. You will likely not reach the same number of customers, but the revenue per customer will increase significantly. For your business, this may mean a smaller need for developers, a larger need for support and sales people, and better long-term planning.

Long-term game plan

Unlike a SaaS, which you can shut down anytime, an on-prem appliance will need to be supported for quite some time. Your customers might opt to perform updates only once every 6 months. They will also likely continue using your software for years after acquiring their first license. It’s your responsibility to continue providing support for appliances long after they’ve been released.

It’s best to avoid leaving your customers in a bad position if you decide to exit the market. We encourage writing open source software, or at least providing non-obfuscated code, to ensure they can continue using, updating, and tweaking your code after you’ve moved on.

Working with third-party libraries which require paid licenses will prevent long-term use of your on-prem appliance.

Using technology vendors who lock you into their closed platform will also make it difficult to migrate away down the road. It’s better to choose a technology vendor/partner who provides open source solutions, flexible and fair payment terms, and deep knowledge and experience in this domain (e.g., Jidoteki).

It’s important to plan carefully before making such decisions, and to have a good long-term game plan before jumping into the world of on-premises.

Technology choices

Stable and proven technology is that way for a reason. When you build on top of stable and proven software, you can rest assured it will continue functioning as it should down the line.

New bleeding-edge tech might be appealing, thanks to the way it simplifies things or improves development time, but it is also a double-edged sword. Unproven tech often contains bugs and critical issues which would never (or rarely) occur in battle-tested software. Most of the latest tech is quickly obsoleted by even newer tech. This poses a threat to enterprise customers who need a consistent and stable user experience.

Of course, software and security updates are always necessary, but forcing updates due to bad choices, and introducing breaking changes (again, due to bad technology decisions), is a tough sell to the enterprise. That’s also the reason many resist updating their software: the possibility of something breaking is too great a risk.

We’re not necessarily advocating the use of old beat-up technology, but if it’s been proven to work flawlessly for many years, chances are it will continue to work flawlessly for many more years.

Modify your software

No matter what, you will need to ensure your software is designed to run in an offline, standalone, sandboxed environment. Eliminate as many third-party dependencies as possible. If you must continue supporting a SaaS, use the same codebase (don’t fork and maintain two codebases), and opt for feature flags based on an *Enterprise flag:

(when *Enterprise (enable-ldap-for-enterprise))

Ensure all network calls are sent to localhost or a Unix domain socket. There should be no need or ability to store or retrieve data from third-party resources on the internet.
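
To make that concrete, here’s a minimal Python sketch of what “local-only” communication looks like in practice: the application talks to a local service over a Unix domain socket, and any TCP connections stay on localhost. The socket path and port are made-up examples, not values from our appliances:

import socket

# Talk to a local service over a Unix domain socket instead of a remote host
# (the socket path is a hypothetical example)
db = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
db.connect("/run/myapp/db.sock")
db.sendall(b"PING\n")
print(db.recv(64))
db.close()

# When TCP is unavoidable, connect to localhost only, never a public hostname
cache = socket.create_connection(("127.0.0.1", 6379))
cache.close()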

We previously discussed modifying a SaaS app to run in a VM.

Build automation

If your appliance builds are not automated, you are doing it wrong. There is no reason to build virtual appliances by hand. Files should not be copied and moved around manually. Everything should be built using automation and configuration management tools.

We prefer Ansible and Jidometa for creating reproducible builds on demand (or on schedule).

The idea behind the automation is to either run a set of scripts, or make a few API calls to generate your virtual appliance. Once it’s done, it can be tested manually (or automatically) to ensure it works as expected. This significantly cuts down the time to push out a new appliance or update for customers. It also decreases your own workload since you know you can always have a new appliance or update within just a few minutes.
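
As a rough illustration of the “few API calls” approach, here’s a Python sketch that kicks off a build and polls until it finishes. The endpoints and fields are hypothetical placeholders, not the actual Jidometa API:

import json, time, urllib.request

BASE = "https://build-server.example.com/api"  # hypothetical build server

def api(path, data=None):
    req = urllib.request.Request(BASE + path,
                                 data=json.dumps(data).encode() if data else None,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Kick off a build, then poll until the appliance image is ready
build = api("/builds", {"product": "myapp", "version": "1.4.2"})
while api("/builds/" + str(build["id"]))["status"] not in ("done", "failed"):
    time.sleep(30)
print("build finished:", build["id"])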

The biggest advantage of build automation is reducing the possibility of errors.

Integrate management tools

Applying updates, debugging issues, changing network settings, uploading custom TLS certificates... these are all things your customers will want to do themselves. Enterprise customers will expect some kind of management tools or interface to handle those changes.

We provide a set of open source scripts, an open source API, and an open source UI for managing an appliance. We open sourced it all because we want our customers, and their customers, to feel confident knowing the tools we provide are not secretly doing something they shouldn’t. You won’t get that guarantee when using a closed-source solution.

Behind the scenes, simple management scripts will allow customers to perform backups of their data, and other regular maintenance to ensure things keep running smoothly. Enterprise customers shouldn’t need to be *nix gurus to perform basic tasks, but if they need access to see how things work, that access should be provided - either on-demand, or automatically through SSH keys/passwords.
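
For example, a backup task triggered from the management interface can be as simple as archiving the data directory to a location the customer controls. This is only a sketch, and the paths are hypothetical:

import tarfile, time, pathlib

DATA_DIR = pathlib.Path("/opt/myapp/data")       # hypothetical application data directory
BACKUP_DIR = pathlib.Path("/mnt/backups")        # e.g. an NFS mount provided by the customer

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
name = BACKUP_DIR / time.strftime("myapp-backup-%Y%m%d-%H%M%S.tar.gz")

with tarfile.open(name, "w:gz") as tar:
    tar.add(DATA_DIR, arcname="data")            # archive the data directory

print("backup written to", name)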

Simple update process

Updating an appliance in the wild was one of our earliest challenges. We’ve iterated on the process numerous times and are currently on our fourth approach, which we personally think is the best.

It’s important to ensure updates don’t destroy the appliance. For this reason, it’s best to run the appliance in memory, to protect it from unintentional modifications and to ensure it can always be updated as planned. Atomic updates enable rollback functionality if any part of the update process happens to fail.
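
One common way to get atomic updates with rollback is to stage each release in its own directory and switch a “current” symlink, so a failed update simply means pointing the symlink back at the previous release. Here’s a simplified Python sketch (the paths and version numbers are made up, and our actual update mechanism differs in the details):

import os, pathlib

RELEASES = pathlib.Path("/opt/myapp/releases")   # each release unpacked into its own directory
CURRENT = pathlib.Path("/opt/myapp/current")     # symlink the services actually run from

def activate(version):
    tmp = CURRENT.with_name("current.tmp")
    if tmp.is_symlink() or tmp.exists():
        tmp.unlink()
    tmp.symlink_to(RELEASES / version)
    os.replace(tmp, CURRENT)                     # atomic rename, old link replaced in one step

previous = os.path.basename(os.readlink(CURRENT)) if CURRENT.is_symlink() else None
try:
    activate("1.4.2")                            # point at the newly staged release
    # ... restart services and run health checks here ...
except Exception:
    if previous:
        activate(previous)                       # roll back to the previous release
    raise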

Database migrations should happen after the update is applied successfully. This means separating the migration logic from the update logic, and having it run independently right after the update, or after the first reboot.

Sometimes enterprise customers want special features or require a special one-time emergency fix. With our update process, it’s possible to provide those types of one-off features, and then re-merge them for every other customer at a later date. This wouldn’t be easy in a “typical” environment, and would quickly become unmanageable.

Of course, automatic updates are nice, but enterprise customers don’t want them. They want one-click updates, and they want full control over when those updates are applied. It’s important to ensure a smooth and simple update process, which at most requires a simple reboot to activate the changes. We prefer that approach as it prevents any service disruption until the customer decides to apply the update.

Security hardening

Ensure every aspect of the appliance is secured as best you can. Local exploits are still serious, but much less of a concern than remote exploits and vulnerabilities in libraries such as OpenSSL or in NodeJS web servers.

Running the system in memory will prevent accidental OS modifications. Using hashes and signed updates will prevent malicious or incorrect updates from being applied.
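
Verifying an update before applying it doesn’t need to be complicated. The sketch below only checks a SHA-256 checksum against a published value; a real appliance should also verify a cryptographic signature (the file names here are hypothetical):

import hashlib, pathlib

package = pathlib.Path("/tmp/update-1.4.2.tar.gz")                 # uploaded update package
expected = pathlib.Path("/tmp/update-1.4.2.sha256").read_text().split()[0]

digest = hashlib.sha256(package.read_bytes()).hexdigest()
if digest != expected:
    raise SystemExit("checksum mismatch, refusing to apply update")
print("checksum OK, proceeding with update")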

All ports should be closed by a local firewall, except the ones required for the application to work. We open the SSH and HTTPS ports for appliance management, and allow ICMP pings to verify connectivity. The fewer ports are open, the smaller the attack surface.
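
On a Linux-based appliance, that default-deny policy can be expressed with a handful of iptables rules. The sketch below mirrors the ports we just mentioned; adapt it to your own firewall and services:

import subprocess

RULES = [
    ["iptables", "-P", "INPUT", "DROP"],                                   # default: drop inbound traffic
    ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],               # allow loopback
    ["iptables", "-A", "INPUT", "-m", "conntrack", "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22", "-j", "ACCEPT"],   # SSH (management)
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "443", "-j", "ACCEPT"],  # HTTPS (application/UI)
    ["iptables", "-A", "INPUT", "-p", "icmp", "--icmp-type", "echo-request", "-j", "ACCEPT"],  # ping
]

for rule in RULES:
    subprocess.run(rule, check=True)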

Remotely accessible software should be kept updated, but again this is slightly less of an issue in a “behind-the-firewall” on-premises environment (although, don’t get us wrong, we think it’s an extremely important issue, and we always keep our software up-to-date).

Privacy hardening

Customer privacy is often overlooked, and should be put at the top of the list.

It’s important to ensure your appliance doesn’t leak private customer data onto the internet.

Enterprise customers typically deploy appliances in a private network without internet access, but in the event they don’t, you need to be certain their data never gets leaked.

Don’t phone home unless absolutely necessary, and even then, don’t transfer loads of private customer data. A simple UUID should be sufficient to keep track of customer installations/deployments, without divulging unnecessary user information.
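
If you do phone home, the payload can be limited to an opaque installation UUID generated on the appliance itself. A minimal sketch (the endpoint and file location are hypothetical):

import json, pathlib, urllib.request, uuid

ID_FILE = pathlib.Path("/etc/myapp/install-id")          # hypothetical location on the appliance
ID_FILE.parent.mkdir(parents=True, exist_ok=True)
if not ID_FILE.exists():
    ID_FILE.write_text(str(uuid.uuid4()))                # generated once, contains no customer data

payload = json.dumps({"installation": ID_FILE.read_text().strip()}).encode()
req = urllib.request.Request("https://updates.example.com/checkin",
                             data=payload,
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req, timeout=10)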

We test our appliances with traffic sniffers before deployment, and try to notify customers of unencrypted data leaving the appliance, as well as data which may leak private information such as IP addresses (geo-location?).

Additional enterprise services

The enterprise loves LDAP, Single Sign-On, NFS, audit logs, and VMware tools. Your application should at least support a subset of those enterprise services. Our appliances have built-in support for NFS, audit logs and VMware tools. We’re looking at methods for adding more, but we still only build them based on customer demand.

If you plan to support enterprise customers, expect them to request such things, and expect to support them as well.

Pricing

Pricing for enterprise appliances can be a bit tricky. In some cases, it’s best to charge per CPU; in other cases, it’s best to charge per user, per team, or per installation.

It depends on how your application works, what it does, and what will be most beneficial and fair for you and your customers. You’ll likely need to sell yearly licenses, as opposed to the typical SaaS monthly model.

When selling an on-premises enterprise appliance, you will often need to provide a “free beta” or “free trial” for customers to evaluate. 30- to 90-day evaluations are not uncommon before moving forward with a paid license.

Occasionally it’s necessary to sign purchase orders and do some legal paperwork before selling your appliance, so the price you charge should offset that extra effort.

Jidometa

We built Jidoteki Meta (Jidometa), our own on-prem virtual appliance, to enable our customers to quickly build and update their own on-prem virtual appliances.

It’s priced as a yearly perpetual license, which means our customers get monthly updates and new features, and can continue using Jidometa in the future, even if we move on to other things. We avoid obfuscating any code (except minified JavaScript) so they can see how it works behind the scenes, and can even suggest modifications if needed.

The appliance is not entirely open source, but we can say maybe 99% of it is, which is a great step forward and a safe investment for our customers.

Of course, the biggest advantage: Jidometa is a complete solution which saves developers and companies an incredible amount of time and effort when building their on-prem virtual appliances. We even use Jidometa to build Jidometa, which means we’re constantly in front of our own software, improving things and making it better for ourselves, which (of course) is better for our customers as well.

Feel free to contact us for help with going on-premises. We hope these best practices have been useful for you, and we’ll be happy to answer any questions which may help you perfect your on-premises virtual appliance.