Identify Identity Lifecycles for Cloud App Security

Last month, we covered the tactics Twitter employs to keep users’ data safe. Stephen Lee, the director of platform solutions at Okta, also spoke at the recent PagerDuty security meetup – about SaaS apps, the considerations companies need to weigh when adopting them, and how to protect the data inside them.

Provisioning the right applications

The great benefit of SaaS products is that they’re available to any user via the web. Yet cloud apps’ ease of use also presents the issue of access control: who gets access to which apps and at what level?

At Okta, automation helps solve one of the major issues of access control: provisioning. When a new employee comes on board – whether in operations, engineering, sales or some other department – they will need to be granted access to certain applications, and automation can greatly simplify this process.

Access control automation also lessens the chance that an employee will have to manually request access to an app down the line, reducing IT’s workload and enhancing productivity.
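As a hedged sketch of what that automation can look like – assuming a SCIM-style identity API with a hypothetical endpoint, token and group-membership route, not Okta’s actual API – a provisioning script might map departments to a standard set of app grants:

```python
# Minimal provisioning sketch against a SCIM-style API. The endpoint, token
# handling, and group-membership route below are hypothetical.
import requests

IDP_URL = "https://idp.example.com/scim/v2"  # hypothetical identity provider
TOKEN = "REDACTED"                           # injected from config management, never hard-coded

# Example mapping of departments to the apps they should receive automatically.
DEPARTMENT_APPS = {
    "engineering": ["github", "pagerduty", "aws-console"],
    "sales": ["salesforce", "zoom"],
}

def provision(email: str, department: str) -> None:
    """Create the user, then grant each app their department calls for."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    user = requests.post(
        f"{IDP_URL}/Users",
        json={"schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
              "userName": email},
        headers=headers,
    ).json()
    for app in DEPARTMENT_APPS.get(department, []):
        # Hypothetical convenience route; real SCIM servers use PATCH on /Groups.
        requests.post(f"{IDP_URL}/Groups/{app}/members",
                      json={"value": user["id"]}, headers=headers)

provision("new.hire@example.com", "engineering")
```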

Plan around mobile device use

An important trend that SaaS is helping to power is the growing mobile-friendliness of workplaces. Mobility is great for employees, who are empowered to work where and how they want, in ways that would have been impossible just 10 years ago. Yet it presents real headaches for IT security teams.

Not only does company data live on an ever-greater number of mobile devices, which can easily be lost or stolen, but many of those devices are personal ones. What happens when an employee resigns or is fired – can her former employer be confident that company data won’t go with her?

These considerations demand a robust system for managing access control, one that makes it easy to grant or revoke access on a person-by-person basis.

Anticipate cloud for everything

SaaS isn’t just about enabling mobility. There are other benefits to adopting cloud technology in the enterprise, including cost savings, access to new features and user-friendliness. Yet the massive cloud shift – one of two major trends Stephen pointed to in the corporate tech marketplace, the other one being mobile device adoption – isn’t without security challenges.

For example, managing authentication and authorization when users are accessing apps from a number of locations on a number of different devices is quite hard. You’ve also got the interaction between the end-user and the actual applications – how do you ensure secure connections on networks you don’t control? Then there’s the matter of security audits, to which all public companies are subjected. If you get audited, you’ll have to prove you can generate data around “who has access to what”.
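As a toy sketch of the kind of “who has access to what” report an auditor might request – assuming you can export app-to-member mappings from your identity provider – the core of it is just inverting a mapping:

```python
# Build a per-person access report from app membership lists (example data).
from collections import defaultdict

app_members = {
    "salesforce": ["alice", "bob"],
    "aws-console": ["alice", "carol"],
    "github": ["bob", "carol"],
}

# Invert the mapping so the report reads per user, as auditors expect.
access_by_user = defaultdict(list)
for app, members in app_members.items():
    for user in members:
        access_by_user[user].append(app)

for user, apps in sorted(access_by_user.items()):
    print(f"{user}: {', '.join(sorted(apps))}")
```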

Stephen suggested thinking about security in the context of “identity lifecycles”. The first step in developing a comprehensive security plan is to map out these lifecycles for both internal and external users, thinking in terms of access control. The lifecycle approach makes particular sense when used in concert with a “secure by default” ethos, where security checks are baked in to the product development process.

Think about your users

Another benefit to identity lifecycles: they force companies to identify whose access, precisely, they are controlling. Whether it’s actual users, in Twitter’s case, or engineers and operations folks internally, “lifecycling” requires a holistic look at security.

At Okta, the question is one of accuracy: can people reliably access what they need to access? Stephen presented a view of Okta’s end-user as the Okta security team’s “customer”.

“They need to be able to access what they need, but they shouldn’t be able to access what they don’t.” – Stephen Lee, Okta

Thinking in terms of others’ needs is a rare thing in the business world, not least in IT, which spends most of its time immersed in device provisioning, bug fixes, system architecture and so on. Yet Stephen points the way to a better, more “customer”-friendly version of enterprise IT.

Watch Stephen’s full presentation here:


You’ve got data. Now what?

Guest blog post from Angel Fernández Camba, Developer at Logtrust. Logtrust lets you view all of your business insights in dashboards and get alerts on any parameter you need, always in real time.

Companies don’t have to search far and wide for ways to increase their top-line revenue. Many are sitting on a wealth of data waiting to be analyzed. But how exactly do you turn data into money? The answer is simple: logs. Logs are events that happen inside servers, applications and firewalls, and they contain a lot of interesting information that every department within a company can use. Marketing can filter the data to discover new sources of revenue, while IT teams can detect and track suspicious behavior in real time to eliminate downtime and increase customer loyalty.

[Figure: IT performance dashboard]

Data is all around us

Mobile phones, tablets and laptops are connected to the company’s servers, and whenever someone is surfing the web, sending emails or using applications, information is generated. This vast amount of “hidden” information holds actionable intelligence waiting to be revealed, but unfortunately most of it is unstructured, which makes it harder to discover useful information among all the data. Some companies turn to big data tools to crunch the numbers, but resources are limited. Those who are familiar with log management solutions create in-house scripts to get the job done. Either way, that’s a lot of effort on limited resources, so you need an easier way to summarize all of your information and find the insights that make a positive impact on your business.

[Figure: big data sources]

How to bring your data to life

Understanding your data is the most important thing. You can have tons of useful data, but without the right tools it is worthless. Data representations can help you understand the relations between variables when an error or event happens. This matters because you may want to know how similar events have occurred, or categorize them. When you are diagnosing a problem, it’s helpful to have a visual representation of the data to spot trends, instead of scanning millions of rows in a table. The way the data is represented also matters: depending on its nature, there are many ways to present data in visual form.

Graphs can give you X-ray vision

Some of us are visual learners, so graphs can be a good way to quickly highlight hotspots. Imagine we have a website and we want to know when the service is not working properly. First, we should look at how many requests we are dispatching and how many of them have errors. A map can help us discover whether we have problems in a specific country or worldwide. Below, for example, is a request distribution map:

[Figure: global request distribution heat map]

Voronoi charts are another way to visualize the same information:

[Figure: Voronoi chart of requests, highlighting Spain]

Since Spain is the problem area, we can drill down one level deeper to see which parts of Spain have errors. A visualization of errors by city can give you further insight into where the problem areas are. And for contrast: how does this error distribution compare to the rest of the world?
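One rough way to answer that question from raw logs is to compute a per-country error rate. Here’s a minimal Python sketch; the log format is made up for illustration:

```python
# Tally requests and server errors per country from a hypothetical access log.
from collections import Counter

requests_by_country = Counter()
errors_by_country = Counter()

with open("access.log") as f:  # illustrative file name
    for line in f:
        # Assumed line format: "<timestamp> <country_code> <http_status> <path>"
        _, country, status, _ = line.split(maxsplit=3)
        requests_by_country[country] += 1
        if status.startswith("5"):  # count 5xx server errors
            errors_by_country[country] += 1

for country, total in requests_by_country.most_common():
    rate = errors_by_country[country] / total
    print(f"{country}: {total} requests, {rate:.1%} errors")
```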

The more requests a country sends, the more likely it is to be a source of errors – but let’s assume our problem is not confined to a single country and instead shows up all over the world. A graph can show the error distribution for a given day, but in the graph below we take a closer look at the last hour to see whether the problem follows the same distribution.

[Figure: hourly error distribution]

The error distribution for the last hour is concentrated in Germany. Stats like these can help us detect system anomalies, and by looking at the server stats we can see where the errors are coming from.

[Figure: CPU overload dashboard]

With the dashboard above, we can see our server had a CPU overload caused by system maintenance routines. If a CPU overload like this starts impacting your customers, you’ll need to notify your on-call engineer to resolve the issue. Get more value from your data by integrating PagerDuty and Logtrust today.


Coming Soon – Advanced Reporting

PagerDuty can help you gain insight into your Operations, helping you better manage and prevent incidents. With PagerDuty, you can streamline your incident response, manage on-call scheduling more efficiently, and soon – analyze & prevent incidents.

Soon we’ll be launching Advanced Reporting to take your Operations to the next level. With our new reports, you’ll be able to see what’s going on across your infrastructure, analyze trends and turn insights into action.

Want to get ideas for how you can make smarter Operations decisions with data? Check out a few of our resources:

Your infrastructure in a single view

PagerDuty brings all of your monitoring services into a single view so you can easily analyze incidents across your entire infrastructure. Our dashboards will help you see alerts over time, by service and by team, so you can reduce alert volume and eliminate non-actionable alerts.

Analyze trends

Once you know what’s happening, dig deeper to understand why. Are your incidents going up or down over time? Which incidents are taking the longest to resolve? How quickly is the team acknowledging and resolving alerts? Our new reports will show you these trends.
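Under the hood these are straightforward aggregates. Here’s a hedged sketch of the math, using made-up incident records:

```python
# Compute mean time to acknowledge (MTTA) and resolve (MTTR) from incident
# timestamps. The records below are illustrative.
from datetime import datetime

incidents = [
    {"created": "2014-07-07T09:00", "acked": "2014-07-07T09:04", "resolved": "2014-07-07T09:40"},
    {"created": "2014-07-08T02:10", "acked": "2014-07-08T02:30", "resolved": "2014-07-08T03:05"},
]

def minutes_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

mtta = sum(minutes_between(i["created"], i["acked"]) for i in incidents) / len(incidents)
mttr = sum(minutes_between(i["created"], i["resolved"]) for i in incidents) / len(incidents)
print(f"MTTA: {mtta:.0f} min, MTTR: {mttr:.0f} min")
```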

Turn insights into action

With a better sense of the hotspots in your infrastructure and your team’s workflow, you can more effectively prioritize your work. Focus on what’s really going to drive greater reliability rather than the problems that happen to be noisiest this week. Drive reductions in your MTTR by finding and eliminating bottlenecks that are slowing down the team. And finally, keep everyone happy and healthy by monitoring the incident workload and proactively giving time off if you notice things have been crazy recently.

For example, let’s say you noticed that you always have more incidents on Mondays. Digging deeper, you see that you’re getting a lot of alerts about a slow response time for one of your API endpoints. It turns out there’s a particularly expensive query that’s running on your database at this time, generated by users who are running weekly reports in the app. You work with the application development team on a way to improve the query, and in the meantime, you increase the threshold for this alert on Mondays (since after all, it didn’t cause an outage) and make sure everyone on your team knows how they can quickly investigate this incident.

You’re invited to our public preview

Before we release Advanced Reporting, we’ll launch a public preview. All of our customers, regardless of their plan, will have the chance to test drive the reports during our preview. Stay tuned – we’ll be sure to let you know when the new reports are available in your account.


Defending the Bird: Product Security Engineering at Twitter

Alex Smolen, Software Engineer at Twitter, recently spoke to our DevOps Meetup group at PagerDuty HQ about the philosophies and best practices his teams follow to maintain a high level of security for their 255+ million monthly active users.

Security in a Fast-Moving Environment: The Challenges

Twitter is one of the world’s most widely used social networks and they are continuing to add users at a steady clip.

While Twitter’s growth is exciting, it also poses challenges from a security standpoint. Because so many people count on Twitter to deliver real-time news and information, it’s a constant target for hackers. Two past incidents illustrate why Twitter security matters:

  • When The Associated Press’ account (@AP) was compromised and a tweet was sent about a nonexistent bombing, it drove down the stock market that day
  • When spam was sent from Barack Obama’s account, Twitter received an FTC consent decree related to information security

Twitter’s fast growth also demands lots of infrastructure investment, which forces the company’s security team to move quickly. The site was established as a Rails app, but it’s since been switched to a Scala-based architecture. That change demanded all-new tools and techniques regarding security.

Plus, Alex noted, the security team is responsible for the many other apps Twitter has acquired, on top of Twitter itself.

The First Step in Securing Twitter: Automation

Automation is one of the strategies both PagerDuty and Twitter use to optimize for security. The driving force behind automation at Twitter is a desire to reserve engineers’ time for work that genuinely requires creativity or judgment.

“When we’re doing something and we think it’s tedious, we try to figure out a way to automate it.” – Alex Smolen, Software Engineer, Twitter

The push for automation first arose at one of Twitter’s Hack Weeks, which Alex likened to a big science fair. From those initial efforts, the security team created a central place to manage security information and run both static and dynamic analyses.

Automation helps Twitter’s engineers find security issues early on in the development process. When security problems do crop up, Twitter’s automation tools – in concert with PagerDuty’s operations performance platform – help assign incidents to the right people, so problems get solved more quickly.

One example is a program called Brakeman, which is run against Rails apps and flags vulnerabilities in the apps’ code. If a vulnerability is discovered, the developer is alerted so they can get to the issue quickly. The goal is to close the loop as fast as possible, since the later something is discovered, the more complex and expensive it is to resolve.
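As a hedged sketch of that loop – our illustration, not Twitter’s internal tooling – you could run Brakeman’s JSON report through a small script that pages on high-confidence findings via PagerDuty’s generic events API (the app path and service key are placeholders):

```python
# Run Brakeman, then trigger a PagerDuty incident for high-confidence warnings.
import json
import subprocess
import requests

# Brakeman can emit a machine-readable report with `-f json`.
result = subprocess.run(
    ["brakeman", "-f", "json", "/path/to/rails/app"],  # placeholder path
    capture_output=True, text=True,
)
report = json.loads(result.stdout)

high = [w for w in report.get("warnings", []) if w.get("confidence") == "High"]
if high:
    requests.post(
        "https://events.pagerduty.com/generic/2010-04-15/create_event.json",
        json={
            "service_key": "YOUR_SERVICE_KEY",  # placeholder integration key
            "event_type": "trigger",
            "description": f"Brakeman found {len(high)} high-confidence warnings",
            "details": {"warnings": [w["message"] for w in high]},
        },
    )
```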

Other tools include Coffee Break for JavaScript and Phantom Gang, which dynamically scans Twitter’s live site. As with Brakeman, issues are assigned to the right on-call person for the job.

The Second Step: Robust Code Review Process

Security is not just the security team’s responsibility; it is owned by engineers across the company. There are also specific teams that deal with spam and abuse.

On the theme of shared accountability, Twitter’s developers are encouraged to work out security kinks early in the code-development process. Sensitive code gets a security review as soon as it is submitted, and devs can also use a self-serve form to request the security team’s input.

The security engineering team keeps itself accountable with the help of a homebuilt dashboard showing which reviews need to be done. Once upon a time, Twitter’s security engineers used roshambo to assign code reviews, but as the team scaled, they switched to a script that assigns reviews randomly.

“Roshambo is really hard to do over Skype.” – Alex Smolen

The Third Step: Designing Around Users

Twitter users, all 255-plus-million of them, have a vested interest in the site remaining secure. For that reason, some of Twitter’s security measures are customized for specific use cases.

One is two-factor authentication, which has been available on Twitter for some time. Initially, it was SMS-based; today, there is a natively built version that can generate a private key to sign login requests.
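To illustrate the general shape of key-based login approval – this is not Twitter’s actual implementation – here’s a minimal sketch using an Ed25519 signing key via the PyNaCl library:

```python
# The phone holds a private key generated at enrollment; the server stores only
# the public half and verifies a signed, server-issued challenge at login.
from nacl.signing import SigningKey  # pip install pynacl

device_key = SigningKey.generate()          # stays on the phone
server_public_key = device_key.verify_key   # what the server stores

challenge = b"login:3f9c2a:2014-07-11T12:00:00Z"  # made-up challenge format
signed = device_key.sign(challenge)

server_public_key.verify(signed)  # raises nacl.exceptions.BadSignatureError if forged
print("login approved")
```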

Another user-facing measure is an emphasis on SSL. Twitter was one of the first major services to require 100% SSL. Yet because non-SSL connections are still possible by default, Alex’s team has built in HTTP Strict Transport Security (HSTS), which tells browsers to only visit the SSL version of the site. Another strategy in use is certificate pinning: if someone tries accessing Twitter with a forged certificate, the native client won’t accept it.
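Enabling HSTS, at its simplest, means attaching one response header; a minimal sketch (the max-age value is illustrative, not Twitter’s actual policy):

```python
# Append an HSTS header to a WSGI-style header list: HTTPS only for a year,
# subdomains included.
def add_hsts(headers: list) -> list:
    headers.append(("Strict-Transport-Security",
                    "max-age=31536000; includeSubDomains"))
    return headers

print(add_hsts([("Content-Type", "text/html")]))
```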

Ultimately, Alex said, security is about enabling people – both users and Twitter’s own engineers. Given that Twitter’s security team represents about 1% of all the engineers in the company, keeping Twitter secure isn’t easy. But with the right processes and tools, those engineers can do their jobs effectively and keep Twitter humming.

Watch Alex’s full presentation here:

Stay tuned for blog posts around the other two security meetup presentations from Stephen Lee (Okta) and our very own Evan Gilman.


Effective Start / End Practices for On-Call Scheduling

Since we launched on-call handoff notifications, lots of our customers have used them to make sure they never forget when they’re on call. Over the years, we’ve seen a variety of on-call schedules, and we thought we’d share some of the more effective practices we’ve seen.

Exchange Shifts During Business Hours

Below is a distribution of all start and end times for on-call shifts scheduled within PagerDuty:

[Figure: distribution of on-call shift start and end times]

The most popular time to hand off an on-call shift is midnight, followed by 8:00 AM and 9:00 AM, then 5:00 PM and 6:00 PM. Despite the popularity of the midnight swap, at PagerDuty we recommend having your handoff occur during business hours, preferably when both parties are present in the office. Unless you both happen to be in the office at midnight, in which case, go home.

Switching your shift while on-site gives you the opportunity to talk to the next person going on-call about any issues that occurred during the previous shift, or to give a heads up on anything they may want to be on the lookout for.

At GREE, the team syncs up every Monday morning to review alerts from the previous week, go over the upcoming schedule and hand off the rotation to the next on-call team. This gives each team additional insight into the week ahead and makes sure everyone knows who is the primary, secondary and manager responsible for keeping GREE reliable each week.

Don’t Switch Shifts Over the Weekend

Below is a graph of shift handoffs distributed by day of the week:

[Figure: shift handoffs by day of the week]

If teams exchange on-call responsibility while on-site, we’d expect to see fewer shift changes over the weekend – and indeed, this distribution closely aligns with our scheduling philosophy at PagerDuty. By switching shifts on Monday, you can recap an entire week of data with minimal confusion.

Or you can schedule your shift exchanges during your weekly team meetings. This still allows you to review information and give a heads up about any potential problems ahead.

Keep Regular Shift Lengths

Another hot topic in on-call scheduling is shift length. Should you switch weekly? Daily? Hourly? While much of this may depend on the size of your team, you’ll also want to consider other factors. Review your historical alerting data for hot times in your systems to make sure no single person is getting the short end of the stick – a recipe for burnout.

Below is a distribution of popular shift lengths, from 1 hour up to 2 weeks:

[Figure: distribution of shift lengths]

The most popular shift lengths seem to be 8 hours, 12 hours, 1 day and 1 week. Keeping simple shift lengths means less confusion and forgetfulness about when you begin or end a shift.

So when should you use each of these shift lengths?

  • 8 Hours – Great for someone who is covering the business day. You may have another team covering off-business hours.
  • 12 Hours – This is popular for teams utilizing PagerDuty’s Follow-the-Sun schedule, which allows your international teams to be on-call during hours they would be awake.
  • 1 Day – Simple for medium-sized teams where everyone is going to be responsible for one day.
  • 1 Week – Great for small teams so they don’t have to toss responsibility back and forth to each other.

Find a Schedule that Works for Your Team

At PagerDuty, each internal team handles on-call scheduling differently. Our Operations team has a simple weekly rotation, while our Realtime team has a split weekday/weekend rotation, where people are on call either during the week or over the weekend. This is because our Realtime team is slightly larger than our Operations team, so they wanted team members on call more often and for shorter shifts to keep their operational skills from getting rusty.

Our actual Realtime team schedule:

[Figure: the Realtime team’s on-call schedule]

It’s important to find a process that works for your team. Even within a single organization, different departments may find one approach works better than another. PagerDuty’s On-Call Schedules give you the ability to customize your team’s shifts however works best for you. If you’re not quite sure where to start, just remember the basics: switch shifts when both parties are present in the office (if possible) and maintain a standard shift length (e.g. 12 hours, 1 day, 1 week) to help avoid confusion about when someone’s shift starts and ends.

To make handoffs even more convenient, we also offer advance notice of when your shift starts and ends. Simply log into your PagerDuty account and edit your On-Call Notification Rules from your profile page to get started.


Get Notified Before You Go On-Call

In February, we launched On-Call Handoff Notifications so you never forget you’re on call and miss an alert!

By popular demand, we’ve tweaked this feature to let you decide when you want to be notified of your shift, up to 48 hours before your shift begins.

[Figure: on-call handoff notification settings]

From your profile page in PagerDuty, scroll down to On-Call Handoff Notification Rules. Simply select how many hours before your shift begins or ends you’d like to be notified, and choose your notification method.

Are you a bit of a snoozer who needs to be reminded multiple times? No problem – you can set up to 10 On-Call Handoff Notification Rules.

Pro Tip: When deciding when to get notified about your shift, take a glance at your schedule to see when you typically go on call. If your shift switches in the morning, you may not want your notification to needlessly wake you up in the middle of the night.


Meetup: Keeping Customer Data Safe

[Image: PagerDuty security meetup, July 11, 2014]

This Friday, July 11th at 12:00 PM, we’re hosting our second Meetup at PagerDuty HQ. Swing by, grab a slice of pizza and a cup of beer, and learn from Twitter, PagerDuty and Okta all about how you can keep your customers’ data safe.

RSVP on our meetup group to attend. Not in San Francisco? No sweat – we’ll also be streaming the event live; register here to save your spot.

Alex Smolen, Software Engineer at Twitter. Alex has a master’s degree from the School of Information at UC Berkeley. Previously, he was a web security consultant at Foundstone, a division of McAfee.

With over a billion global registered users, Twitter’s security team is responsible for keeping every account safe and protecting Twitter as a tool for free speech. From two-factor authentication to geo-signals, threat levels differ for each user, and account-level security needs to be granular enough to identify and stop hackers across a diverse user base. Working closely with engineering teams across the company to design and implement secure systems, Alex Smolen and his security team take an automated approach, deploying a specific suite of tools to proactively find and fix vulnerabilities.

Evan Gilman, Operations Engineer at PagerDuty. Evan is a Senior Engineer on our Operations team, and when he isn’t in the SF office, you can find him with a camera in an exotic part of the world.

Evan will discuss how we establish security standards at PagerDuty and constantly validate our security architecture. He will also dive into how PagerDuty creates fault-tolerant protocols and sets up monitoring to immediately tackle any security threat. For PagerDuty, protecting customer data not only builds customer trust, but helps ensure maximum uptime across our platform.

Stephen Lee, Director of Platform Solutions at Okta. Stephen is in charge of product strategy and evangelism, focusing on solutions for ISV/SI partners and customers. Prior to joining Okta, Stephen spent 10+ years at Oracle in roles spanning engineering to product management in the area of identity management. With the ever-expanding number of devices, cloud applications and people (employees, partners, customers and consumers), IT faces a tough challenge in managing access securely and efficiently. When managing security for internal users, the problem often spans IT, HR and Operations. When managing security for external-facing applications, the problem goes beyond IT and business owners – potentially involving partners’ and customers’ IT.

RSVP for Meetup | RSVP for Live Stream

*Doors open at 11:30 AM. Event will start promptly at 12:00 PM and doors will close at 12:05 PM.


Security Monitoring, Alerting and Automation

Constant validation is an essential piece of PagerDuty’s security methodology – and it takes place by way of continuous monitoring and alerting. A robust monitoring system helps us proactively detect issues and resolve them quickly.

Here are a handful of the monitoring and alerting tactics that we employ.

Port Availability Monitoring

Using our dynamic firewalls, we maintain a list of ports that should be open or closed to the world. Since this information is held in our Chef server, we are able to build out the checks for which ports should be open or closed on each server. We then run these checks continuously, and if one fails, we receive a PagerDuty alert for it. We use a framework called Gauntlt to do this, as it makes simple checks against infrastructure security very easy.
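Gauntlt expresses these checks as Gherkin-style attack files; as a rough stand-in, here’s the same idea in a minimal Python sketch, with the expected-ports list hard-coded (ours is generated from Chef data):

```python
# Verify each host's actual port state against the expected open/closed lists.
import socket

EXPECTED = {  # illustrative; in our setup this comes from the Chef server
    "web-1.example.com": {"open": [443], "closed": [22, 3306]},
}

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connection succeeded

for host, ports in EXPECTED.items():
    failures = [p for p in ports["open"] if not port_is_open(host, p)]
    failures += [p for p in ports["closed"] if port_is_open(host, p)]
    if failures:
        # In our setup, a failed check becomes a PagerDuty alert.
        print(f"ALERT {host}: ports in unexpected state: {failures}")
```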

Centralized Logging and Analysis

We currently use Sumo Logic for our centralized logging. From a security standpoint, we do this because one of the first things an attacker will do is shut down logging to hide their tracks. By shipping logs off the host and setting up pattern alerts on them, we can quickly react to problems we find. In addition, we use OSSEC to collect and analyze all syslog and application log data.

Active Response

Lastly, for well-understood attacks, we have tools in place that can take action without any input from our team members. We are still very early in our active-response implementation, but as our infrastructure grows, we will need to build out more of these solutions so we are not constantly reacting to security incidents.

  • DenyHosts. We have deployed DenyHosts to every server in our infrastructure. If a non-existent user tries to log in, or if there is another sign of a brute-force attack, we actively block the IP. While we have external SSH disabled on our infrastructure, we still leverage a set of gateway or ‘jump’ servers to access our servers. Since setting this up last July, we have blocked 1,085 unique IP addresses from accessing our infrastructure.

  • OSSEC. We use the open-source intrusion detection system OSSEC to detect strange behavior on our servers. It continuously analyzes critical log files and directories for anomalous changes. OSSEC has different ‘levels’ of alerts; low- and medium-level ones send out an email, while high-level alerts create a PagerDuty incident so a member of our Operations team can immediately respond to the problem (see the sketch below). We are not currently leveraging OSSEC’s built-in blocking abilities, but as we learn more about the common attack patterns on our infrastructure, we plan to enable them.
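Here’s a simplified illustration of that routing – not our production code; the threshold, addresses and service key are placeholders:

```python
# Route an OSSEC alert: low/medium levels go to email, high levels page.
import smtplib
from email.message import EmailMessage
import requests

HIGH_LEVEL = 10  # illustrative cutoff; OSSEC levels range from 0 to 15

def route_ossec_alert(level: int, summary: str) -> None:
    if level >= HIGH_LEVEL:
        requests.post(
            "https://events.pagerduty.com/generic/2010-04-15/create_event.json",
            json={"service_key": "YOUR_SERVICE_KEY",
                  "event_type": "trigger",
                  "description": f"OSSEC level {level}: {summary}"},
        )
    else:
        msg = EmailMessage()
        msg["Subject"] = f"OSSEC level {level}: {summary}"
        msg["From"] = "ossec@example.com"
        msg["To"] = "ops@example.com"
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

route_ossec_alert(12, "checksum changed on /usr/sbin/sshd")
```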

Being proactive about monitoring is how we keep our services up and running. The active-response tools listed above hint at where we’d like to go with our security architecture.


Alert on the Internet of Everything

Our customers are natural tinkerers and builders, and we’re excited to launch our integration with Temboo.

Many PagerDuty customers have found great success in using our solution to alert the right person and teams when issues occur in their systems and software. Some PagerDuty customers have been applying our alerting and on-call capabilities to other unique use cases such as creating an on-call rotation for roommate chores and sending alerts when trees are illegally cut down.

Temboo offers a unique programming platform that normalizes access to 100+ APIs, databases, and code utilities to give developers the ability to connect to other applications without all the headache.

APIs are powerful, but require maintenance

Applications aren’t useful when they’re siloed, and APIs give disparate applications a common way to talk to one another. But for developers, maintaining these integrations and keeping up with API documentation is a hassle. Temboo sits on top of APIs to abstract away the complexity of managing and integrating with other applications. With Temboo, you can generate just a few lines of code in the programming language of your choice from your browser, and use those few lines to easily incorporate the benefits of over 2,000 API processes into your project.

Helping makers connect

Arduino is an open-source, lightweight microcontroller board designed to give makers an easy way to create devices that interact with their environment using sensors and actuators. The uses of Arduino are endless: with the ability to sense the environment, tinkerers have created robots, thermostats and motion detectors from scratch. Temboo partners with Arduino to make it easier for projects to interact with web applications. With Temboo, every Arduino can easily grab data and interact with web-based services like Fitbit, Facebook, Google – and now PagerDuty. Temboo’s integration with PagerDuty (aka PagerDuty Choreos) makes it easier for Arduino and other hardware to trigger PagerDuty alerts.

[Figure: Arduino–Temboo–PagerDuty schema]

For example, if you really want to buy a drone on eBay and want real-time alerts when one is listed, Temboo’s eBay and PagerDuty Choreos let you do just that. Or if you want to receive an alert whenever the humidity in your greenhouse dips below a certain level, you can use Temboo’s PagerDuty Choreos for that, too. Or even if you just want an alert every time the weather at the beach is warm enough to go swimming, Temboo and PagerDuty can take care of that as well – all this and more can be done with just a few short lines of code, thanks to Temboo’s integration with PagerDuty.
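For flavor, here’s what the greenhouse example’s logic looks like written directly against PagerDuty’s generic events API – Temboo’s generated code will look different, and the sensor read is stubbed out:

```python
# Page when greenhouse humidity dips below a threshold (values illustrative).
import requests

def read_greenhouse_humidity() -> float:
    return 31.5  # stub; on an Arduino this would be a real sensor read

HUMIDITY_FLOOR = 40.0  # percent

humidity = read_greenhouse_humidity()
if humidity < HUMIDITY_FLOOR:
    requests.post(
        "https://events.pagerduty.com/generic/2010-04-15/create_event.json",
        json={"service_key": "YOUR_SERVICE_KEY",  # placeholder integration key
              "event_type": "trigger",
              "description": f"Greenhouse humidity low: {humidity:.1f}%"},
    )
```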

Let your imagination take you far and away. Read this integration guide to start connecting PagerDuty with the Internet of Everything.


10 Common Ops Mistakes

Updated 7/24/2014: This blog post was updated to more accurately reflect Arup’s talk.

Arup Chakrabarti, PagerDuty’s engineering manager for Operations, stopped by Heavybit Industries’ HQ to discuss the biggest mistakes an operations team can make and how to head them off. To check out the full video, visit Heavybit’s video library.

1. Getting It Wrong in Infrastructure Setup

Creating Accounts

A lot of people use personal accounts when setting up enterprise infrastructure deployments. Instead, create new accounts using corporate addresses to enforce consistency.

Be wary of how you store passwords. Keeping them in your git repo could require you to wipe out your entire git history at a later date. It’s better to save passwords within configuration management so they can be plugged in as needed.
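A minimal sketch of the pattern, with illustrative paths and key names:

```python
# BAD: a literal password committed to git lives in the repo's history forever.
# DB_PASSWORD = "hunter2"

# BETTER: read the secret from a file your configuration management renders
# onto the host at converge time, outside version control.
import json

with open("/etc/myapp/secrets.json") as f:  # path rendered by Chef/Puppet/etc.
    DB_PASSWORD = json.load(f)["db_password"]
```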

Selecting Tools

Another good move with new deployments: select your tools wisely. For example, leverage PaaS tools as long as possible – that way, you can focus on acquiring customers instead of building infrastructure. And don’t be afraid to employ “boring” products like Java. Well-established, tried-and-true tech can let you do some really cool stuff.

2. Poorly Designed Test Environments

Keep Test and Production Separate

You don’t want to risk your test and production environments mingling in any way. Be sure to set up test environments with different hosting and provider accounts from what you use in production.

Virtual Machines

Performing local development? There’s no way around it: applications will run differently on local machines and in production. To simulate a production environment as closely as possible, create VMs with a tool like Vagrant.

3. Incorrect Configuration Management

Ansible and Salt are both tools that are really easy to learn. Ansible in particular makes infrastructure-as-code deployment super-simple for ops teams.

What is infrastructure-as-code? Essentially, it’s the process of building infrastructure in such a way that it can be spun up or down quickly and consistently. Server configurations are going to get screwed up regardless of where your infrastructure is running, so you have to be prepared to restore your servers in as little time as possible.

Whatever tool you use, as a rule of thumb it’s best to limit the number of automation software tools you’re using. Each one is a source of truth in your infrastructure, which means each one is also a point of failure.

4. Deploying the Wrong Way

Consistency matters

Every piece of code must be deployed in as similar a fashion as possible. But getting all of your engineers to practice consistency can be a challenge.

Orchestrate your efforts

Powerful automation software can certainly help enforce consistency, but such tools are only appropriate for big deployments. When you’re getting started, Arup suggests driving deploys from git and employing an orchestration tool – for example, Capistrano for Rails, Fabric for Python, or Ansible and Salt for both orchestration and configuration management.

5. Not Handling Incidents Correctly

Have a process in place

Creating and documenting an incident management process is absolutely necessary, even if the process isn’t perfect.

You should be prepared to review the incident-management document on an ongoing basis, too – though if you’re not experiencing much downtime, those reviews won’t need to happen very often.

Put everyone on-call

It’s becoming less and less common for companies to have dedicated on-call teams – instead, everyone who touches production code is expected to be reachable in the event of downtime.

This requires a platform (like PagerDuty) that can notify different people in different ways. What really matters is getting a hold of the right people at the right time.

6. Neglecting Monitoring and Alerting

Start anywhere

The specific tool you use for monitoring is less important than just putting something in place. PagerDuty uses StatsD in concert with Datadog; open-source tools like Nagios can be just as effective.
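To show how low the barrier is: StatsD speaks a plain-text UDP protocol, so emitting your first metric is a one-liner. A minimal sketch, using the conventional default port:

```python
# Increment a counter in StatsD ("metric_name:value|c" over UDP, port 8125).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"app.requests:1|c", ("localhost", 8125))
```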

If you have the money, an application performance management tool like New Relic might be a good fit. But, what matters most is that you have a monitoring tool on deck.

“You have no excuse to not have any monitoring and alerting on your app, even when you first launch.” – Arup Chakrabarti, Engineering Manager, PagerDuty

7. Failing to Maintain Backups

Systematizing backups and restores

Just like monitoring and alerting, backing up your data is non-negotiable. Scheduling regular backups to S3 is a standard industry practice today.

At least once a month, you should try restoring your production dataset into a test environment to confirm that your backups are working as designed.
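Here’s a hedged sketch of what such a restore drill might look like, assuming Postgres dumps shipped to S3 – the bucket, host and table names are illustrative:

```python
# Pull the newest backup from S3, restore it into the TEST database, and
# sanity-check that the restored data is non-empty.
import subprocess
import boto3  # pip install boto3

s3 = boto3.client("s3")
objects = s3.list_objects_v2(Bucket="example-backups", Prefix="prod-db/")["Contents"]
latest = max(objects, key=lambda o: o["LastModified"])
s3.download_file("example-backups", latest["Key"], "/tmp/restore.sql")

# Restore into the test environment, never production.
subprocess.run(["psql", "-h", "test-db.internal", "-f", "/tmp/restore.sql"], check=True)

count = subprocess.run(
    ["psql", "-h", "test-db.internal", "-tAc", "SELECT count(*) FROM users"],
    capture_output=True, text=True, check=True,
).stdout.strip()
assert int(count) > 0, "restored database looks empty; investigate your backups"
```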

8. Ignoring High Availability Principles

‘Multiple’ is the watchword

Having multiple servers at every layer, multiple stateless app servers and multiple load balancers is a no-brainer. Only with multiple failover options can you truly say you’ve optimized for HA.

Datastore design matters, too

Clustered datastores (like Cassandra) are essential because, with multimaster data clusters, individual nodes can be taken out with absolutely no customer-facing impact. That’s what makes them ideal in fast-moving deployment environments.

9. Falling Into Common Security Traps

Relying solely on SSH

Instead of exposing SSH directly on your database servers and load balancers, use gateway boxes. You can run proxies through these gateways and lock traffic down if you suspect an incursion.

Not configuring individual user accounts

When an employee leaves your organization, it’s nice to be able to revoke his or her access expediently. But there are other reasons to set people up with user accounts to your various tools. Someone’s laptop may get lost. An individual might need his password reset. It’s a lot easier, Arup notes, to revoke or reset one user password than a master account password.

Failing to activate encryption in dev

Making encryption a part of the development cycle helps you catch security-related bugs early in development. Plus, forcing devs to think constantly about security is simply a good practice.

10. Ignoring Internal IT Needs

Not strictly an operations problem, but…

IT isn’t always ops’ concern. But on certain issues, both teams are stakeholders. For example:

  • Commonality in equipment: If an engineer loses her custom-built laptop, how long will it take to get her a replacement? Strive for consistency in hardware to streamline machine deployments.

  • Granting access to the right tools: On-boarding documents are a good way to share login information with new hires.

  • Imaging local machines: With disk images stored on USB, provisioning or reprovisioning equipment is a snap.

  • Turning on disk encryption: With encryption, no need to worry if a machine gets lost.

There are millions more mistakes that operations teams can make. But these 10 tend to be the most commonly seen, even at companies like Amazon, Netflix and PagerDuty.

Have your own Ops mistake you’d like to share? Let us know in the comments below.
