Tackling Hardware & Software Challenges to Manage A Growing Community
WeWork provides space, community and services so our members can make a life, not just a living. With members in 15 cities, and many more to come, our community is global. The Digital department at WeWork is 40 people strong and builds products that allow us to seamlessly manage our growing business, innovate in the physical space, and provide tools and services to our members to help them connect, grow, and ultimately become more successful.
Most of our web stack’s code-base is Ruby on Rails, but we have a microservices architecture, and some of our services use Node.js. We also write a good amount of Swift/Objective-C and Java for our mobile platforms. We’re always trying to balance between experimenting with new technologies and having a consistent environment. This is especially evident on the client side, where we’ve worked with several platforms, including React, Angular, and just straight up jQuery. We love all of them, but recently we’ve been leaning towards reusable components built in React.
Our major DevOps objective is transitioning from a PaaS to a fully containerized AWS implementation. A suite of internal tools is being developed to support that move, so it’s not uncommon to read and write Ruby, Python, Bash, and Go all in the same day.
Connecting Members with Their Physical WeWork Space
What’s really great about what we do is that we get to work on so many different and interesting domains, from hardware integrations through complex backends to swanky frontends.
One of my favorites is connecting our members with the physical aspects of our buildings. The first of every month is move-in day, a very exciting time for our members and our team. That day, thousands of members walk into their new office in the morning and are greeted by their Community Manager, who hands them their WeWork key card. That card gives them 24/7 access to their building, opens other locations around the world whenever they book workspace or conference rooms through our web interface or iOS and Android apps, and lets them collect benefits from hundreds of partners.
While there are off-the-shelf access systems out there, most of them were designed for either enterprises and their employees, or hotels and short-term guests. The WeWork model is different from both. For example, your hotel key card is programmed to open your room and some common areas, and doesn’t do anything else. Your WeWork card, however, is directly tied to who you are. This means we have to keep our own data layer on top of a 3rd party system, keep them in sync, and gracefully handle downtime in any part of the chain and across different networks. An RFID card reader being down, or the hubs losing network, should not affect our app servers, but it should propagate the failure information to the right people so we can immediately take action.
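One way to picture that isolation is a minimal retry queue: card updates to the third-party system are queued and retried in the background, so an outage never blocks the app servers but is still surfaced after repeated failures. This is a sketch under assumed names — the `sync` client and the alerting-via-logger hook are hypothetical stand-ins, not our actual implementation.

```ruby
require "logger"

# Sketch: buffer card updates bound for a third-party access system so
# that a reader or network outage never blocks the app servers.
class CardSyncQueue
  MAX_ATTEMPTS = 3

  def initialize(client:, logger: Logger.new($stdout))
    @client  = client   # hypothetical third-party access-system client
    @logger  = logger
    @pending = []       # updates we could not deliver yet
  end

  # Enqueue instead of calling the access system inline.
  def push(card_update)
    @pending << { update: card_update, attempts: 0 }
  end

  # Called periodically by a background worker.
  def drain
    @pending.delete_if do |job|
      begin
        @client.sync(job[:update])
        true # delivered, drop from the queue
      rescue StandardError => e
        job[:attempts] += 1
        if job[:attempts] >= MAX_ATTEMPTS
          # Propagate the failure to the right people instead of
          # retrying forever.
          @logger.error("card sync failed: #{e.message}")
          true
        else
          false # keep for the next drain
        end
      end
    end
  end

  def pending_count
    @pending.size
  end
end
```

The key property is that `push` always succeeds from the caller’s point of view; only `drain`, running out-of-band, ever talks to the flaky dependency.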
We ended up building a service that aggregates the information from all the readers in all of our buildings, and sends it over a message bus to our servers, in a way that lets them listen in on specific buildings, reader type, etc. This service also abstracts away the complexity of communicating with hardware in different physical locations, so that when members travel to a different location, they don’t have to wait in line like the rest of us. Simply scan your card, and your information pops up on the Community Manager’s WeWork dashboard (also built internally).
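The "listen in on specific buildings, reader type, etc." part can be sketched as topic-based routing. This in-memory version is illustrative only — a real deployment would sit on an actual message bus, and the `building.<id>.<reader_type>` topic scheme is an assumption, not our production naming:

```ruby
# Sketch: topic-based routing so servers can subscribe to events from
# specific buildings or reader types, e.g. "building.nyc-1.*".
class ReaderEventBus
  def initialize
    @subscribers = [] # [pattern_segments, callback] pairs
  end

  # Patterns may use "*" as a wildcard segment.
  def subscribe(pattern, &callback)
    @subscribers << [pattern.split("."), callback]
  end

  def publish(topic, event)
    segments = topic.split(".")
    @subscribers.each do |pattern, callback|
      callback.call(event) if match?(pattern, segments)
    end
  end

  private

  def match?(pattern, segments)
    return false unless pattern.size == segments.size
    pattern.zip(segments).all? { |p, s| p == "*" || p == s }
  end
end
```

A dashboard process could subscribe to `building.nyc-1.*` while a monitoring process subscribes to `building.*.door`, each seeing only the slice of reader traffic it cares about.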
Making WeWork.com Performant
Another interesting project we’ve recently tackled is part of a concentrated effort around performance, specifically on our public facing website: wework.com. One interesting aspect of wework.com is that it’s tightly integrated into our internal business systems; the main call to action on the site is to book a tour in one of our locations, and the physical experience of touring, the process that follows, and the downstream flow of sending an e-contract and moving people in are all part of one flow. So, for example, when we compare companies within WeWork that are growing at different paces, years after joining, we can trace them back to the source URL and web journey they went through before they even booked the tour.
Our front page is obviously not as dynamic as our Member Network, for example, but we use Rails to serve both backend and frontend, which leads to some challenges when you need to make a zippy first impression.
The ideal situation is being able to cache the entire contents of the HTML at the CDN level, significantly reducing both the time to first byte and the load on your servers. The biggest challenge with this approach is figuring out a way to identify users based on their session for the purposes of geolocation and A/B experimentation. A lot of the view logic needed to be moved from our Rails views into the client side, while still maintaining basic session data and Rails’ inherent form-data forgery protection using authenticity tokens.
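The split boils down to caching policy: the HTML shell gets shared-cache headers so the CDN can serve it, while the small per-user call stays private. A minimal sketch — the header values, the `/session` path, and the `Surrogate-Key` tag are illustrative assumptions, not our production settings:

```ruby
# Sketch: decide caching policy per path. The HTML shell is safe to
# cache at the CDN; the per-user session endpoint never is.
def cache_headers(path)
  if path == "/session" # per-user bootstrap call, must not be cached
    { "Cache-Control" => "private, no-store" }
  else
    {
      # s-maxage applies to shared caches (the CDN); max-age=0 makes
      # browsers revalidate so a purge takes effect immediately.
      "Cache-Control" => "public, s-maxage=300, max-age=0",
      # Surrogate keys let the CDN purge related pages in one call.
      "Surrogate-Key" => "marketing-pages"
    }
  end
end
```

With this in place the CDN serves identical HTML to everyone, and everything user-specific arrives via the small follow-up request.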
<link href="application.css" rel="stylesheet">
This tiny JS file does a few things. It first sets up the user’s session so that we can identify the user when running experiments and whatnot. It also retrieves some other data that we were originally putting into our Rails application layout. Lastly, it retrieves geo-specific information from the user’s request based on their IP from a local MaxMind data store. The final piece to make all of this work with our Ajax calls and asynchronous forms was to properly pass in the CSRF token generated by Rails per user session. This is done using the same concept as above, where we make a small request to the server and update the proper <meta> tags needed for our forms and Ajax calls to play nicely with the Rails back-end. Fastly has a nice blog post about this technique, which we used as inspiration when implementing our own.
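The server side of that bootstrap call can be sketched as a small JSON payload carrying the CSRF token and geo data. Everything here is an assumed shape — `lookup_geo` stands in for the MaxMind query, and Rails actually derives the authenticity token from the session rather than minting a random one:

```ruby
require "json"
require "securerandom"

# Hypothetical stand-in for a local MaxMind lookup keyed by IP.
def lookup_geo(_ip)
  { city: "New York", country: "US" }
end

# Sketch of the payload the tiny JS file fetches on page load:
# a per-session CSRF token plus geo data for the request IP.
def session_bootstrap(ip)
  {
    csrf_token: SecureRandom.base64(32), # Rails ties this to the session
    geo: lookup_geo(ip)
  }.to_json
end
```

On the client, the fetched `csrf_token` is written into the `<meta name="csrf-token">` tag, which is what lets cached pages still pass Rails’ forgery check on form and Ajax submissions.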
So how did we do? We will let the numbers speak for themselves!
Now, keep in mind that these are “load test” numbers, meaning that we are doing a simple curl to wework.com and just measuring the response from the server. This does not include assets being loaded and browser processing. Even so, we have seen a huge boost in our load capacity and can breathe a little easier the next time our CEO steps up on stage at a major event and talks about us :)