How to Ditch Scheduled Maintenance

You like sleep and weekends. Customers hate losing access to your system due to maintenance. PagerDuty operations engineer Doug Barth has the solution:

Ditch scheduled maintenance altogether.

That sounds like a bold proposition. But as Doug explained at DevOps Days Chicago, it actually makes a lot of sense.

Scheduled maintenance tends to take place late at night on weekends—a tough proposition for operations engineers and admins. Customers require access at all hours, not just daylight ones. And scheduled maintenance implies your system is less reliable than you think, because you’re afraid to change it during the workday.

The solution? Avoid it altogether, and replace it with fast, iterative maintenance strategies that don’t compromise your entire system.

That might sound a bit ‘out there.’ But shelving scheduled maintenance is easier than you think. In his talk, Doug offered four ways to do it.

Deploy in stages

First things first: if you discard scheduled maintenance, your deployments need to be rock-solid. They should be scripted, fast, and quick to roll back, and rollbacks should be tested periodically so they don’t fall out of date.

They also need to be forward and backward compatible by one version. It’s not an option to stop the presses when you push out a new version. Red-blue-green deployments are crucial here, as they ensure only a third of your infrastructure undergoes changes at any given time.

Lastly, stateless apps must be the norm. You should be able to reboot an app without any effect on the customer (like forced logouts or lost shopping carts).
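
To make the “deploy in thirds” idea concrete, here is a minimal sketch, assuming hypothetical deploy_to, healthy, and rollback helpers that stand in for your own deploy scripts and health checks (this is not PagerDuty’s actual tooling). The point is that each slice of the fleet is verified before the next one changes, and a failed check rolls back only that slice.

```python
import time

# Hypothetical helpers -- swap in your real deploy and health-check logic.
def deploy_to(hosts, version):
    print("deploying %s to %s" % (version, hosts))

def healthy(hosts):
    print("health-checking %s" % hosts)
    return True

def rollback(hosts, version):
    print("rolling %s back to %s" % (hosts, version))

def staged_deploy(all_hosts, new_version, old_version, stages=3, warmup=30):
    """Deploy to one slice of the fleet at a time, verifying each stage."""
    chunk = max(1, len(all_hosts) // stages)
    for i in range(0, len(all_hosts), chunk):
        batch = all_hosts[i:i + chunk]
        deploy_to(batch, new_version)
        time.sleep(warmup)  # let the new processes warm up before judging them
        if not healthy(batch):
            rollback(batch, old_version)  # only this slice is affected
            raise RuntimeError("deploy halted on batch %s" % batch)

staged_deploy(["web1", "web2", "web3", "web4", "web5", "web6"], "v42", "v41", warmup=0)
```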

Send canaries into the coal mine

Use canary deploys judiciously to test rollouts, judge their integrity and compare results. These test deployments affect only a small segment of your system, so bad code or an unexpected error doesn’t spell disaster for your entire service.

Doug suggested a few practical ways to accomplish this:

  • Gate features so you can ship code dark and gradually enable new functionality for a subset of customers (a minimal sketch of this follows the list).
  • Find ways to slowly bleed traffic over from one system to another, to reduce risk from misconfiguration or cold infrastructure.
  • Run critical path code on the side. Execute it and log errors, but don’t depend on it right away.
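
As an illustration of the feature-gating idea (a sketch, not PagerDuty’s implementation), the code below ships dark and uses a deterministic hash of the customer ID to decide who sees the new behavior, so the rollout percentage can be nudged up gradually without flip-flopping customers between old and new paths.

```python
import hashlib

def feature_enabled(feature_name, customer_id, rollout_percent):
    """Deterministically bucket a customer into a 0-99 range.

    The same customer always lands in the same bucket, so raising
    rollout_percent only ever adds customers to the feature.
    """
    key = ("%s:%s" % (feature_name, customer_id)).encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return bucket < rollout_percent

# Ship the code dark (0%), then widen the gate a few percent at a time.
if feature_enabled("new-billing-page", customer_id="acct_42", rollout_percent=5):
    print("render the new billing page")
else:
    print("render the old billing page")
```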

As Doug summed it up for the DevOps Days crowd: “Avoid knife-edge changes like the plague.”

Make retries your new best friend

Your system should be loaded with retries. Build them into all service layer hops, and use exponential backoffs to avoid overwhelming the downstream system. The requests between service layers must be idempotent, Doug emphasized. When they are, you’ll be able to reissue requests to new servers without double-applying changes.
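
Here is a hedged sketch of what that can look like in Python using the requests library. The idempotency key header is an assumption about how your services are built: the client generates it, the server deduplicates on it, so a retried request can land on a different server without double-applying the change.

```python
import time
import uuid
import requests

def post_with_retries(url, payload, max_attempts=5, base_delay=0.5):
    """POST with exponential backoff; safe to retry thanks to the idempotency key."""
    idempotency_key = str(uuid.uuid4())  # server is assumed to dedupe on this
    for attempt in range(max_attempts):
        try:
            resp = requests.post(
                url,
                json=payload,
                headers={"X-Idempotency-Key": idempotency_key},  # hypothetical header name
                timeout=5,
            )
            if resp.status_code < 500:
                return resp  # success, or a client error that retrying won't fix
        except requests.RequestException:
            pass  # network hiccup: fall through and retry
        # Exponential backoff so we don't hammer a struggling downstream service.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("gave up after %d attempts" % max_attempts)
```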

Use queues where you don’t care about the response to decouple the client from the server. If you’re stuck with a request/response flow, use a circuit breaker approach, where your client library delivers back minimal results if a service is down—reducing front-end latency and errors.
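
And here is a toy circuit breaker to illustrate that fallback behavior (names and thresholds are made up): after enough consecutive failures the breaker opens, and the client returns a minimal canned result instead of waiting on a dead service.

```python
import time

class CircuitBreaker:
    """Open after max_failures consecutive errors; try again after reset_after seconds."""

    def __init__(self, max_failures=3, reset_after=30):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        if self.opened_at is not None and time.time() - self.opened_at < self.reset_after:
            return fallback  # breaker is open: answer immediately with minimal results
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            return fallback
        self.failures = 0
        self.opened_at = None
        return result

# Usage: recommendations are nice-to-have, so fall back to an empty list.
# breaker = CircuitBreaker()
# recommendations = breaker.call(lambda: fetch_recommendations(user_id), fallback=[])
```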

Don’t put all of your eggs in one basket

Distribute your data to many servers, so that no one server is so important you can’t safely work on it.

At PagerDuty, the team uses multi-master clusters, which help with operations and vertical scaling. They also spread data across many database servers with systems like Cassandra: no one server is that special, which means operational work can happen during the day.

Put together, these strategies help admins and operational engineers sleep more, worry less and maintain better—all ahead of schedule.

 Questions? Share your thoughts in the comments below.


October Hackday: iOS 8, Dev Docs ChatOps, and more

We love hackday here at PagerDuty – it’s a great opportunity for everyone at the company to work on projects they’re passionate about, get the creative juices flowing, and see how we can mix up the tools and technologies we know to help out our users and each other. Last month was one of PagerDuty’s most exciting hackdays, with a great mix of cool “What if?” projects, open source contributions, and little tools to help out around the office.

We’ve really been enjoying iOS 8 here, and love all the new APIs that came with it. Steve won our “Awesome” category with a proof of concept that uses the Yosemite/iOS 8 “Handoff” feature to start incident response on an iPhone, then pick it up where it left off in your Mac’s browser.


Also within the iOS 8 theme, Alper and Clay showed off a “Today” widget that lets iPhone users know when they’re on-call next:


We actually just got these two projects into our newest iOS build, so look out for an update on your phone and download the app if you don’t have it!

We also had some great dev-tool projects.

Grant, Amanda, and Greg have been giving our developer docs some much needed love. Greg gave them a spiffy new skin; Grant moved them behind Cloudflare (improving latency for our EU customers) and enabled HTTPS (security yay!); and Amanda started on moving them from a custom Rails app to a static Jekyll backend.


Even the fancy scheduling power of Google Calendar can’t fight Parkinson’s Law, so David Y wrote a bot to help us figure out what meeting rooms are free via HipChat.


Our new Sr. PM Chris, along with the UX team, wrote a flashcard deck for the spaced repetition system Anki to help new employees put faces to names.


Tim wrote a plugin for Lita (our ChatOps bot) that looks up common security vulnerabilities from different databases, so that we can quickly access this info and show it in our team HipChat rooms.

Finally, Shack put together a guide on how to set up custom Chrome Omnibox searches for internal resources. We’re loading this into our default machine images to help new employees find things.


 

And that’s the October 2014 Hackday! Want to participate in the next one? Come work with us :)

 

 


Fostering Diversity in Tech with an Online Community Policy

PagerDuty has a social media policy. Here’s how we developed it, and why.

Shortcut to our Community Policy: http://www.pagerduty.com/community-policy/

At PagerDuty, we’re committed to promoting diversity in technology and fostering innovation from all people, regardless of what might make them unique or different. Creating a culture that truly promotes diversity takes thoughtful work and requires that we reflect on how small things that we do or say can be perceived by others. Unconscious biases are always at play. Even simple things, like the snacks and beverages you offer at office events, can be perceived in unintended ways and have unintended consequences.

There is a decent amount of awareness around how we can create the right culture in physical spaces like the office workplace and professional conferences. But what about social media? The amount of harassment on social media is surprising and discouraging. We view social media conversations the same way we view person-to-person conversations in the office: inappropriate or harassing comments have no place in our physical spaces, and we don’t want to see them in our online spaces either.

Unfortunately, we had a surprise recently when we posted Twitter ads with pictures showcasing PagerDuty t-shirts. We received a few comments that, we felt, didn’t support the kind of culture we’re trying to create.

When we noticed these comments, the reaction was universal: this was not OK, and we had to do something about it. Everyone here, from the leaders of the company down, wanted to take action. And so the idea of developing an explicit online community policy was born.

We researched what other companies are doing, and were surprised to find a lack of information about how others handle social media harassment. It’s common for organizers of tech conferences to lay out anti-harassment guidelines for attendees; and, of course, most workplaces have internal anti-harassment rules. But there weren’t many examples of companies doing the same for social media.

What we finally based ours on was a template provided by the Geek Feminism wiki. Our policy says that we don’t tolerate harassment, regardless of an individual’s identity or affiliations, and that if we notice such behavior or if a member of our community reports a problem, we can take any action to address it up to and including expulsion. The policy applies to any online space PagerDuty provides, from Twitter and Facebook to our Community Forums.

It may not seem like much. After all, it’s only 160 words on a webpage. But we believe that clearly stating our position on social media harassment and putting a process in place to respond to it will help us reduce and prevent it over time. For one, it’s a tangible way of promoting the culture we want to create. For another, we can now point to clear guidelines and a list of possible consequences if an incident should occur.

So far, we haven’t needed to do anything as drastic as block or expel someone. It only took a couple of days to develop the policy and post it. Once it was up, we approached the people who’d made the problematic Twitter comments — by email, so it didn’t become a public discussion — and asked them to take their comments down. Most of them were surprised, not realizing they’d said anything offensive (which is common — most instances of harassment aren’t intentional). A couple of them pushed back, but we simply thanked them for their feedback and reiterated the request to remove their comments. We’ve had to call on the policy a few times since then as well, but so far straightforward requests have done the trick.

Our hope is to find ways to make the policy more visible to our followers and community members. But in the meantime, it’s giving us the tool we needed to create the kind of community spaces we want.

Questions or comments? Contact us at communities@pagerduty.com.


Reducing your Incident Resolution Time

A little while back, we blogged on key performance metrics that top Operations teams track. Mean time to resolution (MTTR) was one of those metrics. It’s the time between failure & recovery from failure, and it’s directly linked to your uptime. MTTR is a great metric to track; however, it’s also important to avoid a myopic focus.

Putting MTTR into perspective

Your overall downtime is a function of the number of outages as well as the length of each. Dan Slimmon does a great job discussing these two factors and how you may want to think about prioritizing them. Depending on your situation, it may be more important to minimize noisy alerts that resolve quickly (meaning your MTTR may actually increase when you do this). But if you’ve identified MTTR as an area for improvement, here are some strategies that may help.

Working faster won’t solve the problem

It’d be nice if we could fix outages faster simply by working faster, but we all know this isn’t true. To make sustainable, measurable improvements to your MTTR, you need to do a deep investigation into what happens during an outage. True – there will always be variability in your resolution time due to the complexity of incidents. But taking a look at your processes is a good place to start – often the key to shaving minutes lies in how your people and systems work together.

Check out your RESPONSE time

The “MTTR” clock starts ticking as soon as an incident is triggered, and with adjustments to your notification processes, you may be able to achieve some quick wins.

Curious to know how your response time stacks up? We looked at a month of PagerDuty data to understand acknowledgement (response) and resolution times, and how they are related. The median acknowledgement time was 2.82 minutes, and 56% of incidents were acknowledged within 4 minutes. The median resolution time was 28 minutes. For 40% of incidents, the acknowledgement time was between 0 and 20 percent of the resolution time.

Median Response Time: 2.82 minutes

Median Resolution Time: 28 minutes

(Chart: incident response time as a percentage of resolution time)

If your response time is on the longer side, you may want to look at how the team is getting alerted. Do alerts reliably reach the right person? If the first person notified does not respond, can the alerts automatically be escalated, and how much time do you really need to wait before moving on? Setting the right expectations and goals around response time can help ensure that all team members are responding to their alerts as quickly as possible. 

Establish a process for outages

An outage is a stressful time, and it’s not when you want to be figuring out how you respond to incidents. Establish a process (even if it’s not perfect at first) so everyone knows what to do. Make sure you have the following elements in place:

  1. Establish a communication protocol - If the incident needs more than one person working on it, make sure everyone understands where they need to be. A conference call or Google Hangout works well, as does a single HipChat room.
  2. Establish a leader - This is the person who’ll be directing the work of the team in resolving the outage. They’ll be taking notes and giving orders. If the rest of the team disagrees, the leader can be voted out, but another leader should be established immediately.
  3. Take great notes - Record everything that’s happening during the outage. These notes will be a helpful reference when you look back during the post mortem. At PagerDuty, some of our call leaders like using a paper notebook beside their laptop as a visual reminder that they should be recording everything.
  4. Practice makes perfect - If you’re not having frequent outages, practice your incident response plan monthly to make sure the team is well-versed. Also, don’t forget to train new hires on the process.

To learn more, check out Blake Gentry’s talk about incident management at Heroku.

Find and fix the problem

Finding out what’s actually going wrong is often the lion’s share of your resolution time. It’s critical to have instrumentation and analytics for each of your services, and make sure that information helps you identify what’s going wrong. For problems that are somewhat common and well understood, you may be able to implement automated fixes. We’ll dive into each of these areas in later posts.


Supercharge Data Infrastructure Automation with SaltStack and PagerDuty

Welcome, SaltStack, to the PagerDuty platform! SaltStack is an open source configuration management and remote execution tool that allows you to manage tens of thousands of servers. With the latest PagerDuty integration, you can monitor failures, oversee changes to your infrastructure, keep tabs on system vitals, and manage code deployments. Already mighty on its own, SaltStack has super powers when integrated with PagerDuty.

Monitoring Failures

The most obvious use of the SaltStack and PagerDuty integration is triggering alerts when things break. If your deployment doesn’t go as planned, we’ll let the right person know.

Monitoring Changes

Salt states allow you to declare what state a server should be in and, if it’s not compliant, make the necessary changes to enforce that state. Using Salt’s “onchanges” requisite, state changes can trigger incidents in PagerDuty, which you can then acknowledge and triage.

Monitoring System Vitals

Like Salt states, Salt monitoring states let you define what thresholds your systems should be running at. While monitoring states aren’t designed to make or enforce changes to your deployment, they’ll monitor your systems’ vitals and generate an alert when your system runs outside the bounds that have been configured.

Code Deployment

Salt can also help you manage code deployments across your infrastructure. For example, you may need to stop a web server, deploy your code, then restart the web server. You’d then want to make sure the web application is still functional after the web server restarts and that the new deployment hasn’t placed stress on the web server. Using Salt and PagerDuty, you can automate this process and be alerted in real time if things don’t go as planned.
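
As a rough illustration (not an official runbook), here is what that flow might look like driven from the Salt master in Python, using Salt’s LocalClient and PagerDuty’s generic Events API. The minion target, state name, service key, and health-check URL are all placeholders.

```python
import requests
import salt.client

PD_EVENTS_URL = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
PD_SERVICE_KEY = "YOUR_SERVICE_KEY"  # from a PagerDuty Generic API service

def page(description):
    """Trigger a PagerDuty incident via the generic Events API."""
    requests.post(PD_EVENTS_URL, json={
        "service_key": PD_SERVICE_KEY,
        "event_type": "trigger",
        "description": description,
    }, timeout=10)

def deploy(target="web*"):
    local = salt.client.LocalClient()
    local.cmd(target, "service.stop", ["nginx"])      # stop the web server
    local.cmd(target, "state.sls", ["myapp.deploy"])  # push the new code (hypothetical state)
    local.cmd(target, "service.start", ["nginx"])     # bring it back up

    # Verify the app still responds after the restart; page someone if not.
    try:
        resp = requests.get("https://example.com/health", timeout=5)
        if resp.status_code != 200:
            page("Deploy health check failed with HTTP %d" % resp.status_code)
    except requests.RequestException as exc:
        page("Deploy health check unreachable: %s" % exc)

if __name__ == "__main__":
    deploy()
```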

Learn More at AWS re:Invent

We’re rallying the troops and heading down to Vegas for AWS re:Invent this week. If you want more information on our newest integration – or for questions, swag or just a chat – visit booth 948. Our buds, SaltStack, will be at K19.

Ready to get started? Check out the guide for setting up SaltStack with PagerDuty.

 


Movember is Upon Us….


That’s right…it’s one of our favorite times of the year here at PagerDuty: Movember. Not only do we get to grow awesome mustaches, but we are also supporting a great cause as a work community. There’s something special and exciting about watching an epic ‘stache evolve over the month while supporting and raising awareness for men’s health. For the third year in a row, we’ve joined forces with the Movember Foundation, the leading organization committed to – quite literally – changing the face of men’s health.

How does it work, you ask? PagerDuty “Mo Bros” shave their face on the first day of the month and after that, no mo’. Sure, you can trim and groom the ‘stache however you see fit – some opt for the handlebar, the villain, the sheriff – but the object is to grow it out as long as possible. Of course, we can’t forget about our “Mo Sistas” either! Ladies can show support for the cause by creating buzz around the office or donating to the Movember Foundation.

At the end of the month, we compare “results” and vote on the most epic mustache, and the winner is crowned Mr. Movember. The most enthusiastic gal throughout the month earns the title “Miss Movember.” We actually get quite competitive about it. It’s not uncommon to see a bidding war over whether one of the most threatening, hefty Mo Bro ‘staches gets shaved mid-month. We end the month with a Movember Party where the “Mo Bros” shave their mustaches and we snap “after” shots. Sounds like fun? ABSOLUTELY. Join our team and track our progress at our Mo Space page. Team name: PagerDuty.

Go Mo Bros, grow!


Who watches the watchmen?

How we drink our own champagne (and do monitoring at PagerDuty)

We deliver over 4 million alerts each month, and companies count on us to let them know when they have outages. So, who watches the watchmen? Arup Chakrabarti, PagerDuty’s engineering manager, spoke about how we monitor our own systems at DevOps Days Chicago earlier this month. Here are some highlights from his talk about the monitoring tools and philosophies we use here at PagerDuty.

Use the right tool

New Relic is one of the tools we use because it provides lots of graphs and reports. Application performance management tools give you a lot of information, which is helpful when you don’t really know what your critical metrics are. But they can be hard to customize, and all that information can result in “analysis paralysis.”

PagerDuty also uses StatsD and Datadog to monitor key metrics, because they’re easy to use and very customizable, though it can take a little time (we did a half-day session with our engineers) to get teams up to speed on the metrics. SumoLogic analyzes critical app logs, and PagerDuty engineers set up alerts on patterns in the logs. Wormly and Monitis provide external monitoring, though the team did have to build out a smarter health check page that alerts on unexpected values. And, finally, PagerDuty uses PagerDuty to consolidate alerts from all of these monitoring systems and notify us when things go wrong.
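
A “smarter” health check page in that spirit might look something like this Flask sketch (a toy example, not PagerDuty’s actual endpoint): instead of always returning 200, it checks a few dependencies and returns an error status that external monitors can alert on.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def check_database():
    """Hypothetical dependency check -- replace with a real ping or query."""
    return True

def check_queue_depth():
    """Hypothetical check that the work queue isn't backing up."""
    return True

@app.route("/health")
def health():
    checks = {
        "database": check_database(),
        "queue_depth": check_queue_depth(),
    }
    status = 200 if all(checks.values()) else 503
    # External monitors (Wormly, Monitis, etc.) alert on a non-200 response.
    return jsonify(checks), status

if __name__ == "__main__":
    app.run(port=8080)
```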

Avoid single-host monitoring for distributed systems

“Assume that anything that’s running on a single box is brittle and will break at the most inopportune time,” says Chakrabarti. Rather, PagerDuty sets up alerts on cluster-level metrics, such as the overall number of 500 errors, not the number in a single log file, and overall latency, not one box’s latency. For this reason, PagerDuty tries to funnel all of their systems through the monitoring system rather than feeding data directly from the servers or services into PagerDuty.
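
For example, with the statsd Python client (one of several ways to do this), every host emits the same counter and timer names; aggregation happens downstream in StatsD/Datadog, so alerts can be set on the cluster-wide totals rather than any single box’s numbers. The metric names here are made up.

```python
from statsd import StatsClient

# Every host reports to the same metric names; the pipeline sums them,
# so alerts fire on cluster-wide totals rather than per-box values.
statsd = StatsClient(host="localhost", port=8125, prefix="webapp")

def record_response(status_code, duration_ms):
    statsd.incr("requests")                        # overall request volume
    if status_code >= 500:
        statsd.incr("errors.http_5xx")             # alert on the cluster-wide sum of these
    statsd.timing("request_latency", duration_ms)  # alert on overall latency, not one box's
```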

We funnel server and service alerts through a highly-available monitoring system so that we alert on the overall impact rather than individual box issues.

Chakrabarti also discusses dependency monitoring, or how to monitor the performance of SaaS systems that you don’t control. There’s no great answer for this problem yet. We do a combination of manual checks and automated pings. As an example, he tells the story of getting a call from a customer who wasn’t getting their SMSes. Upon investigation, it turned out that our SMS provider was sending the messages, but for some reason the wireless carrier was blocking them. As a result, we built out a testing framework, “a.k.a. how to abuse unlimited messaging plans.” Every minute, we send an SMS alert to every major carrier, and measure the response times.

We send SMS messages to the major mobile carriers every minute and measure the response times to make sure we know if the carriers are experiencing issues that may be affecting the deliverability of our SMS alerts.

Alert on what customers care about

A lot of people make the mistake of alerting on every single thing that’s wrong in the log, Chakrabarti says. “If the customer doesn’t notice it, maybe it doesn’t need to be alerted on.” But, he warns, the word “customer” can mean different things within the same organization. “If you’re working on end-user things, you’re going to want to monitor on latency. If you’re worried more about internal operations, you might care about the CPU in your Cassandra cluster because you know that’ll affect your other engineering teams.” We have a great blog post on what to alert on if you want to learn more.

Validate that the alerts work

Perhaps the best example of watching the watchers is the fact that “every now and then, you might have to go in manually and check that your alerts are still working,” says Chakrabarti. “We have something at PagerDuty we call Failure Friday, when basically we go in and attack our own services.” The team leaves all the alerts up and running, and proceeds to break processes, the network, and the data centers, with the intent of validating the alerts.

What has the team learned from Failure Friday? “Process monitoring was co-mingled with process running,” Chakrabarti explains. “If the service dies, the monitoring of the service also dies, and you never find out about it until it dies on every single box.” And that, in short, is the reason for external monitoring.


100 and Counting: Aruba Networks Now a PagerDuty Platform Partner

We’re excited to announce that Aruba Networks has joined PagerDuty’s partner ecosystem, officially marking our 100th platform integration. Big welcome to Aruba, and big thanks to PagerDuty’s community of builders and customers for helping us reach this milestone.

Identify security attacks and whales

Aruba Networks’ ClearPass access management system gives companies visibility into activity in their network. By integrating with PagerDuty, ClearPass customers can know about network security issues immediately to reduce customer impact.

“Integrating our solution with PagerDuty’s operations performance platform allows Aruba Networks to provide end-to-end incident management on top of our ClearPass system to give our customers peace of mind knowing incidents will be escalated until someone responds.” – Cameron Esdaile, senior director of emerging technologies at Aruba Networks.

Companies can also identify VIPs who enter their premises and triage any log-in issues to deliver a great customer experience. Casinos can make sure that their high rollers are happily taken care of from the get-go.

Build on PagerDuty’s Platform

Now with 100 ready-to-use integrations available in our partner ecosystem, the PagerDuty community has an easy, quick way to connect their accounts to other infrastructure, application, and business tools for more seamless operations management. And we’re not stopping here! PagerDuty will continue to actively forge additional integration partnerships. Interested in building an integration? Let’s talk!


Blameless post mortems – strategies for success

When something goes wrong, getting to the ‘what’ without worrying about the ‘who’ is critical for understanding failures. Two engineering managers share their strategies for running blameless post mortems.

Failure is inevitable in complex systems. While it’s tempting to find a single person to blame, according to Sidney Dekker, these failures are usually the result of broader design issues in our systems. The good news is that we can design systems to reduce the risk of human error, but in order to do that, we need to look at the many factors that contribute to failure – both systemic and human. Blameless post mortems, where the goal isn’t to figure out who made a mistake but how the mistake was made, are a tool that can help. While running one is not an easy task, the effort is well worth it. Here, two engineering managers describe some of the challenges and share how they make blameless post mortems successful.

Start with the right mindset

The attitude you take to the discussion is critical and sets the tone for the entire conversation. “You ignore the ‘this person did that’ part,” explains PagerDuty Engineering Manager Arup Chakrabarti. “What matters most is the customer impact, and that’s what you focus on.”

Mike Panchenko, CTO at Opsmatic, says that the approach is based on the assumption that no one wants to make a mistake. “Everyone has to assume that everyone else comes to work to do a good job,” he says. “If someone’s done something bad, it’s not about their character or commitment, it’s just that computers are hard and often you just break stuff.”

Don’t fear failure

Because it’s going to happen. “One thing I always tell my team is that if they’re not screwing up now and then, they’re probably not moving fast enough,” says Chakrabarti. “What’s important is, you learn from your mistakes as quickly as possible, fix it quickly, and keep moving forward.”

Nip blaming in the bud 

There are no shortcuts here. “You have to be very open about saying, ‘Hey, I will not tolerate person A blaming person B,’” says Chakrabarti. “You have to call it out immediately, which is uncomfortable. But you have to do it, or else it gives whoever’s doing it a free pass.”

Panchenko agrees: “I’m a pretty direct guy, so when I see that going on, I immediately say ‘stop doing that.'”

That goes for inviting blame, too

“There’s a natural tendency of people to take blame,” says Panchenko. “But a lot of times, there’s the ‘last straw’ that breaks the system.” He describes a recent outage where a bunch of nodes were restarted due to a bug in an automation library. That bug was triggered by the re-appearance of a long-deprecated Chef recipe in the run list. The recipe, in turn, was added back to the runlist due to a misunderstanding about the purpose of a role file left around after a different migration/deprecation. The whole thing took over a month to develop. “Whoever was the next person to run that command was going to land on that mine,” he says, “and usually the person who makes the fatal keystroke expects to be blamed. Getting people to relax and accept the fact that the purpose of the post mortem isn’t to figure out who’s going to get fired for the outage is the biggest challenge for me.”

Handle ongoing performance issues later

It’s natural to be apprehensive about sharing things that didn’t go well when your job performance or credibility may be on the line. The trick is separating ongoing performance issues from “failures” that happen because of shortcomings in your processes or designs.

Panchenko pays attention to the kind of mistake that was made. “Once you see a failure of a certain kind, you should be adding monitoring or safeguards,” he says. “If you’re doing that, the main way someone’s going to be a bad apple is if they’re not following the process. So that’s what I look for: do we have a process in place to avoid the errors, and are the errors happening because the process is being circumvented, or does the process need to be improved?”

And sometimes, yes, you do need to fire people. “I have had scenarios where a single individual keeps making the same mistake, and you have to coach them and give them the opportunity to fix it,” says Chakrabarti. “But after enough time, you have to take that level of action.”

Get executive buy-in 

Both Arup and Mike agree that successful blameless postmortems won’t work without backing from upper-level management. “You have to get top-down support for it,” says Chakrabarti, “and the reason I say that is that blameless postmortems require more work. It’s very easy to walk into a room and say ‘Dave did it, let’s just fire him and we’ve fixed the problem.'” Instead, though, you’re telling the executives that not only did someone on your team cause an expensive outage, but they’re going to be involved in fixing it too. “Almost any executive is going to be very concerned about that,” he says.

“The one thing that’s definitely true is that the tone has to be set at the top,” says Panchenko. “And the tone has to be larger than just postmortems.”

Have you led or participated in blameless post mortems? We’d love to hear more about your experiences – leave us comments below!


rm -rf "breast cancer"

At PagerDuty, we pride ourselves on supporting the everyday hero, so naturally, we take it upon ourselves to give back to the community. Each year, we’ve actively participated in Movember, so this year we decided to unite together to support other causes that we are equally passionate about. One of our beloved employees is a breast cancer survivor, so we wanted to rally around this cause by creating more awareness and helping support those who are fighting back.

Last Wednesday we celebrated Breast Cancer Awareness Month by putting a spicy spin on our weekly Whiskey Wednesday tradition with pink bubbly and delicious cupcakes. To raise money for the cause, PagerDutonians purchased raffle tickets for awesome t-shirts and a custom bag. Some even played poker, with all proceeds being donated to breast cancer charities.

We’re starting to ramp up our social responsibility efforts and are looking for new ways to lend a helping hand. In addition to raising money for breast cancer charities, last Friday a group of us volunteered at a Habitat for Humanity site. Next month, we will celebrate Movember with a mustache-growing competition, and all money raised will go to the Movember Foundation. For the holidays, we plan on hosting another food drive for the SF-Marin County Food Bank. If you have any suggestions for more ways we can give back, please feel free to reach out!
