Twilio launches Flex, a fully programmable contact center

Earlier this year we reported that Twilio was going to launch a full contact center solution called Flex on March 12 — lo and behold, today is March 12 and Twilio has indeed announced the launch of Flex at the Enterprise Connect conference in Orlando. Flex brings together virtually every part of Twilio’s existing infrastructure and developer platform, which already power nearly 40 billion interactions a year, and bundles them with a rather slick user interface for companies that want to set up an out-of-the-box contact center or update their existing deployments.

Twilio’s expertise has long been in providing back-end communications services, and its design experience is mostly in building APIs, not user interfaces. With this move, though, the company is giving enterprises (and this product is meant for the kind of companies that have hundreds or thousands of people in a contact center) a full-stack contact center with a complete graphical user interface.

As Al Cook, who heads the company’s contact center business, told me, though, the main design philosophy behind Flex is to give users maximum flexibility. He argues that businesses today have to choose between products they can’t customize themselves, which forces them to rely on expensive outside vendors to do the customization for them (and that tends to take a lot of time), or a SaaS contact center that can be quickly deployed but is hard to scale and lacks customization options. “Think of Flex as an application platform,” Cook told me. It takes its cues from Twilio’s experience working with developers and gives enterprises an easy API interface for customizing the service to their liking, but it also provides all of the necessary tools out of the box.

“The reason why APIs were very transformative to the industry is because you are unconstrained in what you can do,” Cook explained. “Once you put a user interface on that, you constrain users.” So for Flex, the team had to ask itself some new questions. “How do you build user interfaces in a fundamentally different way that gives people the best features they want without constraining them?”

Out of the box, Flex supports all of the standard messaging channels that contact centers are now expected to support. These include voice, video, text, picture messaging, Facebook Messenger, Twitter, LINE and WeChat. The service also supports screen sharing and co-browsing. Twilio is also integrating its own intelligent TaskRouter service into Flex to automatically route questions to the right agent. A single Flex deployment can support up to 50,000 agents.

Cook argues that getting started with Flex is a one-click affair, though once it’s up and running, most users will surely need to customize the service a bit for their own needs and embed chat widgets and other functions on their websites and into their apps (think click-to-call, for example). Some of the more in-depth customization can also be done in Twilio Studio, the company’s drag-and-drop application builder.

Most large enterprises already have contact centers, though, so it’s maybe no surprise that some of the thinking behind making Flex as… well… flexible as possible is about giving those users the ability to mix and match features from Flex with their existing tools to allow for a slow and steady migration.

As we reported last month, Flex will also integrate with all the standard CRM tools like Salesforce and Zendesk, as well as workforce management and optimization tools that are currently in use in most contact centers.

Before launching the product today, Twilio had already worked with ING, Zillow, National Debt Relief and RealPage to test Flex. In addition, it has lined up a number of tech and consulting partners to support new users.

Featured Image: Drew Angerer/Getty Images

These hackers put together a prosthetic Nerf gun

How can you participate in a Nerf gun fight if you’re missing a hand? The ingenious Hackerloop collective of tinkerers solved that problem by putting together a prosthetic Nerf gun that you can control with your arm muscles.

In other words, Nicolas Huchet became Barret Wallace from Final Fantasy VII for a day. And here’s what it looks like:


Let’s look at the device more closely. In particular, Hackerloop had to find a way to replace the trigger on the Nerf gun with another firing gesture.

The base gun is a Swarmfire Nerf blaster without the handle. Thanks to some 3D printing, Huchet could wear the device as a prosthetic extension of his right arm — it’s a custom-made casing.

The Nerf gun is then connected to an Arduino-like microcontroller that activates the gun on demand. Finally, Huchet wears three electrodes near his elbow. When he contracts his muscles, the electrodes pick up the electrical activity and send it to the microcontroller.

If the voltage reaches a certain level, the microcontroller fires the Nerf gun. And of course, Huchet played around with it in the streets of Paris. Pretty neat!

In the past, Hackerloop has worked on other creative hacks. The team built a replica of the house from “Up” using paper and foam and flew it over Paris, posting photos to Instagram using a Raspberry Pi.

They also worked on the Nosulus Rift, a VR fart simulator to promote Ubisoft’s South Park game (The Fractured But Whole). Every time you fart in the video game, the Nosulus Rift emits a farting smell.

I tried it myself and it really stinks.

Google promises publishers an alternative to AMP

Google’s AMP project is not uncontroversial. Users often love it because it makes mobile sites load almost instantly. Publishers often hate it because they feel they are giving Google too much control in return for better placement on its search pages. Now Google proposes to bring some of the lessons it learned from AMP to the web as a whole. Ideally, this means users will benefit from Google’s efforts and see faster non-AMP sites across the web (and not just when they arrive from Google’s search results).

Publishers, however, will once again have to adopt a whole new set of standards for their sites. In return, Google is giving them a new path into the increasingly important Top Stories carousel on its mobile search results pages.

“Based on what we learned from AMP, we now feel ready to take the next step and work to support more instant-loading content not based on AMP technology in areas of Google Search designed for this, like the Top Stories carousel,” AMP tech lead Malte Ubl writes today. “This content will need to follow a set of future web standards and meet a set of objective performance and user experience criteria to be eligible.”

AMP, in many ways, is a hack on top of the web that uses a smart combination of modern web technologies, iframes, a stripped-down set of markup, and proxies to speed up pages. Now, with new standards like Web Packaging (you can think of it as a ZIP file for web content), Feature Policy (so developers can turn certain browser features on and off at will), Paint Timing and others, developers have tools to speed up their sites even more.

Google says it wants to feature non-AMP sites in its Top Stories carousel when and if they implement these new standards and meet its performance and user experience criteria. The AMP team stresses that Web Packaging, in particular, will allow it to instant-load pages outside of AMP.

To some degree, these new technologies clear a path for publishers who want to be featured in Google’s Top Stories carousel on mobile without having to use AMP. For now, though, the AMP team is also pretty clear that it can’t give publishers a timeline for when Google itself will start implementing these changes. Many of the standards, after all, are still in flux.

Unsurprisingly then, Google continues to recommend that publishers bet on AMP for the time being. “We hope this work will also unlock AMP-like embeddability that powers Google Search features like the Top Stories carousel,” writes Ubl. “Meanwhile, AMP will be Google’s well-lit path to creating great user experiences on the web. It will be just one of many choices, but it will be the one we recommend.”

With AMP still being the recommended solution, though, some critics will surely wonder whether Google is simply interested in building and highlighting this second path to deflect people’s criticism of the AMP project. Having talked to the AMP team in the past, I’m sure that’s not the goal here, but given the community’s suspicions of AMP, that’s something the team will likely have to address sooner or later.

Featured Image: Alexandre Amaral / EyeEm/Getty Images

CodeStream wants to move developer chat straight to the source code

There are tons of services out there, from Slack to Jira, that are designed to help developers communicate with one another about code issues, but there is a surprising dearth of tools purpose-built to provide communication capabilities right in the IDE where developers work. CodeStream, a member of the Y Combinator Winter 2018 class, aims to fix that.

“We are team chat in your IDE and make it easier for developers to talk about code where they code,” company co-founder and CEO Peter Pezaris explained. He says that having the conversation adjacent to the code also has the advantage of creating a record of the interactions between coders that teams can learn from over time, while making it easier to onboard new developers to the team.

Unlike many YC companies, Pezaris and his co-founder and COO Claudio Pinkus have more than 20 years of experience building successful companies. They say the idea for this one came from a problem they experienced over and over around developer communication. “CodeStream is a story of scratching your own itch. We work as developers and my contention is that people tend to work too much in isolation,” Pezaris said.

Developers can go into GitHub and see every line of code they ever created, back to the very start of the project, but conversations around that code tend to be more ephemeral, he explained. While the startup team uses Slack to communicate about the company, they saw a need for a more specific tool, built right inside the code production tool, to discuss the code itself.

  1. CodeStream – code comment

  2. CodeStream – merge conflict

If you’re thinking that surely something like this must exist already, Pezaris insists it doesn’t because of the way IDEs were structured until recently: they weren’t built to plug in other pieces like CodeStream. “You would be shocked how developers are sharing code,” he said. He recently spoke to a team that took pictures of code snippets with their mobile phones, then shared them in Facebook Messenger to discuss them.

A big question is why an experienced team of company builders would want to join Y Combinator, which is typically populated by young entrepreneurs with little experience looking for help as they build a company. The CodeStream team had a different motivation. They knew how to build a company, but having spent the bulk of their professional lives in New York City, they wanted to build connections in Silicon Valley and this seemed like a good way to do it.

They also love the energy of the young startups and they are learning from them about new tools and techniques they might not have otherwise known about, while also acting as mentors to the other companies given their atypical depth of experience.

The company launched last June. It will eventually charge a subscription fee to monetize the idea. As for what motivated them to start yet another company, they missed working together and the rush involved in building a company. “I took two years off after the sale of my previous business, and I got the itch. I feel better and happier when I’m doing this,” Pinkus said. “It’s the band. We got it back together,” said Pezaris.


Featured Image: PeopleImages/Getty Images

Here’s the first developer preview of Android P

Just like in the last two years, Google is using the beginning of March to launch the first developer preview of the next version of Android. Android P, as it’s currently called, is still very much a work in progress and Google isn’t releasing it into its public Android beta channel for over-the-air updates just yet. That’ll come later. Developers, however, can now download all the necessary bits to work with the new features and test their existing apps to make sure they are compatible.

As with Google’s previous early releases, we’re mostly talking about under-the-hood updates here. Google isn’t talking about any of the more user-facing design changes in Android P just yet. The new features Google is talking about, though, will definitely make it easier for developers to create interesting new apps for modern Android devices.

So what’s new in Android P? Since people were already excited about this a few weeks ago, let’s get one new feature out of the way first: Android P has built-in support for notches, those display cutouts Apple had the courage to pioneer with the iPhone X. Developers will be able to call a new API to check whether a device has a cutout, get its dimensions and then request that full-screen content flow around it.
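Here’s a minimal Kotlin sketch of what working with the cutout looks like, written against the final API 28 names rather than the developer-preview ones (the activity itself is hypothetical):

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

class FullscreenActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Ask the system to extend the window into short-edge cutout areas.
        val params = window.attributes
        params.layoutInDisplayCutoutMode =
            WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
        window.attributes = params

        // Query the cutout's dimensions and keep critical UI out of the notch.
        window.decorView.setOnApplyWindowInsetsListener { view, insets ->
            insets.displayCutout?.let { cutout ->
                view.setPadding(cutout.safeInsetLeft, cutout.safeInsetTop,
                                cutout.safeInsetRight, cutout.safeInsetBottom)
            }
            insets
        }
    }
}
```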

While Google isn’t talking much about user-facing features, the company mentions that it is once again making changes to Android notifications. This time around, the focus is on notifications from messaging apps: developers get a couple of new tools for highlighting who is messaging you, as well as the ability to attach photos, stickers and smart replies to these notifications.
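As a rough illustration (not Google’s own sample), the Person-based MessagingStyle in API 28 lets a messaging app identify the sender and attach an image to a message; the channel ID, sender name and URI below are placeholders:

```kotlin
import android.app.Notification
import android.app.Person
import android.content.Context
import android.net.Uri

fun buildPhotoMessageNotification(context: Context, photoUri: Uri): Notification {
    // Identify the sender with the new Person class so the system can highlight who is messaging.
    val sender = Person.Builder().setName("Alex").build()

    val style = Notification.MessagingStyle(sender)
        .addMessage(
            Notification.MessagingStyle.Message("Check out this photo", System.currentTimeMillis(), sender)
                .setData("image/jpeg", photoUri)  // attach the picture to the message itself
        )

    return Notification.Builder(context, "messages")  // "messages" is a placeholder channel ID
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setStyle(style)
        .build()
}
```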

A couple of new additions to the Android Autofill Framework for developers who write password managers will also make life a bit easier for users, though right now, the focus here is on better dataset filtering, input sanitization and a compatibility mode that will allow password managers to work with apps that don’t have built-in support for Autofill yet.

While Google isn’t introducing any new power-saving features in Android P (yet), the company does say that it continues to refine existing features like Doze, App Standby and Background Limits, all of which it introduced in the last few major releases.

What Google is adding, though, is new privacy features. Android P will, for example, restrict access to the microphone, camera and sensors from idle apps. In a future build, the company will also introduce the ability to encrypt Android backups with a client-side secret, and it will introduce per-network randomization of associated MAC addresses, which will make it harder to track users. This last feature is still experimental for now, though.

One of the most interesting new developer features in Android P is the new multi-camera API. Since many modern phones now have dual front or back cameras (with Google’s own Pixel being the exception), Google decided to make it easier for developers to access both of them through a new API that exposes a fused camera stream that can switch between two or more cameras. Other changes to the camera system are meant to help developers of image stabilization and special-effects tools and to reduce delays during initial captures. Chances are, then, that we’ll see more Frontback-style apps with the release of Android P.
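Google hasn’t published much sample code yet, but based on the Camera2 additions in API 28, discovering a logical multi-camera device might look roughly like this (the helper function is hypothetical):

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata

// Hypothetical helper: list the logical cameras that fuse two or more physical sensors,
// returning each logical camera ID together with the physical camera IDs behind it.
fun findLogicalMultiCameras(context: Context): List<Pair<String, Set<String>>> {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return manager.cameraIdList.mapNotNull { id ->
        val characteristics = manager.getCameraCharacteristics(id)
        val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
            ?: return@mapNotNull null
        if (CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA in capabilities) {
            id to characteristics.physicalCameraIds
        } else {
            null
        }
    }
}
```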

On the media side, Android P also introduces built-in support for HDR VP9 Profile 2 for playing HDR-enabled movies on devices with the right hardware, as well as support for images in the increasingly popular High Efficiency Image File Format (HEIF), which may just be the JPEG killer the internet has been seeking for decades (and which Apple also supports). Developers also get new and more efficient tools for handling bitmap images and drawables thanks to ImageDecoder, a replacement for the current BitmapFactory class.
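For example, decoding a HEIF (or any other supported) image into a bitmap with ImageDecoder takes just a few lines; this is a minimal sketch, and the half-resolution sampling is only there to show the header callback:

```kotlin
import android.graphics.Bitmap
import android.graphics.ImageDecoder
import java.io.File

// Decode an image file (HEIF, JPEG, PNG, ...) into a Bitmap, downsampling at
// decode time instead of post-processing the way you would with BitmapFactory.
fun decodeHalfSize(file: File): Bitmap {
    val source = ImageDecoder.createSource(file)
    return ImageDecoder.decodeBitmap(source) { decoder, info, _ ->
        decoder.setTargetSampleSize(2)  // decode at roughly half the original resolution
    }
}
```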

Indoor positioning is also getting a boost in Android P thanks to support for the IEEE 802.11mc protocol, which measures Wi-Fi round-trip time and in turn allows for relatively accurate indoor positioning. Devices that support the protocol will be able to locate a user to within one to two meters. That should be more than enough to guide you through a mall or pop up an ad when you are close to a store, but Google also notes that some of the use cases here include disambiguated voice controls.
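A rough sketch of the corresponding WifiRttManager API in API 28 might look like this, assuming the app already holds the fine-location permission and has a list of Wi-Fi scan results in hand:

```kotlin
import android.content.Context
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager

// Hypothetical helper: measure the distance to nearby 802.11mc-capable access points.
fun rangeToAccessPoints(context: Context, scanResults: List<ScanResult>) {
    val rtt = context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE) as WifiRttManager

    // Only access points that advertise 802.11mc support can answer ranging requests.
    val request = RangingRequest.Builder().apply {
        scanResults.filter { it.is80211mcResponder() }.forEach { addAccessPoint(it) }
    }.build()

    rtt.startRanging(request, context.mainExecutor, object : RangingResultCallback() {
        override fun onRangingResults(results: List<RangingResult>) {
            results.filter { it.status == RangingResult.STATUS_SUCCESS }
                .forEach { println("AP ${it.macAddress}: ${it.distanceMm / 1000.0} m away") }
        }

        override fun onRangingFailure(code: Int) {
            println("Ranging failed with code $code")
        }
    })
}
```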

Once you are in that store in the mall and want to pay, Android P now also supports the GlobalPlatform Open Mobile API. That name may evoke a sense of green meadows and mountain dew, but it’s basically the standard for building secure NFC-based services like payment solutions.

Developers who want to do machine learning on phones are also in luck, because Android P will bring a couple of new features to the Neural Networks API that Google first introduced with Android 8.1. Specifically, Android P will add support for nine operations: Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze.

But wait, there is more. Now that Kotlin is a first-class language for Android development, Google is obviously optimizing its compiler for it, and for all apps the company is also promising performance and efficiency improvements in its ART runtime.

Clearly, this is one of the more meaningful Android updates in recent years. It’s no surprise, then, that Google is only making images available to developers right now and that you won’t be able to get this version over the air just yet. As with previous releases, though, Google does plan to bring Android P to the Android beta channel (Google I/O is about two months away, so that may be the time for that). As usual, Google will likely introduce a couple of other new features over the course of the beta period and, at some point, it’ll even announce the final name for Android P…

Featured Image: Bryce Durbin

Microsoft wants to help developers bring their AI models to the desktop

Microsoft today announced its AI Platform for Windows developers, a new set of tools that will soon help developers bring the machine learning models they trained in the cloud to their desktop apps. The AI platform in Windows 10, which will launch with the next major version of Windows, makes use of the GPU on your local machine and allows developers to run their models in real time, without the need for a round trip to the cloud.

“What we are building really completes the story for Microsoft from an AI perspective,” Microsoft Partner Group program manager Kam VedBrat told me. In the past, Microsoft talked a lot about its machine learning infrastructure in the cloud and the tooling it built around this. With this, developers can now build their models in the cloud, using their framework of choice, and then easily integrate those models with their desktop apps, using Visual Studio and some of the other tooling Microsoft is building for this.

At the core of this is ONNX, a project that is backed by Microsoft, Facebook and Amazon. It allows developers to convert Caffe2, PyTorch, CNTK and other models into the ONNX format so they can move them between frameworks as necessary.

Microsoft will also allow developers to build image recognition models with the Azure Custom Vision Service and export them for use in Windows ML. Unlike working with a traditional framework, a developer doesn’t need to know the intricacies of building machine learning models to do this. All they need to do is give the service their tagged training data.

These models then make use of the silicon that’s available to them in any given machine, which most likely means a DirectX 12 graphics card or, if that’s not available, the CPU. But the platform will also offer a flexible API for accessing other hardware, including future Intel Movidius vision processing units, for example.

The advantage here, Microsoft corporate VP Kevin Gallo told me, is not just lower latency and increased privacy for your users’ data, but also cost. Running these models in the cloud does, after all, incur a cost that can quickly add up. When they run on the desktop, though, that’s a non-issue.

Starting with the next preview of Visual Studio 15.7, developers can simply add an ONNX file to their Universal Windows Platform (UWP) apps and Visual Studio will generate a model interface for the project. Microsoft will also make tooling for previous versions of Visual Studio available and it’ll add this capability to the Visual Studio tools for AI, too.

Featured Image: James D. Morgan/Getty Images

The day that changed your phone forever

Whether you’re a developer who’s working on mobile apps, or just someone enjoying the millions of apps available for your phone, today is a very special day.

It’s the 10-year anniversary of the original iPhone SDK. I don’t think it’s an overstatement to say that this release changed a lot of people’s lives. I know it changed mine and had a fundamental impact on this company’s business. So let’s take a moment and look back on what happened a decade ago.

There are a lot of links in this piece, many of which were difficult to resurrect on today’s web. Make sure you take the time to explore! I’ve also tried to avoid technical jargon, so even if you don’t know your Swift from a hole in the ground, you can still follow along.

Touching the Future

For many of us, holding that first iPhone at the end of June 2007 was a glimpse of the future. We all wanted to know what was inside the glass and metal sitting in our pockets.

Apple had told us what the device could do but said very little about how it was done. We didn’t know anything about the processor or its speed, how much memory was available, or how you built apps. In many ways, this new device was a black, and silver, box.

As developers, we wanted to understand this device’s capabilities. We wanted to understand how our software design was about to change. We were curious and there was much to learn.

And learn we did. We called it Jailbreaking.


Breaking out of jail

Discoveries happened quickly. It took just a matter of weeks before the filesystem was exposed. A couple of months later, the entire native app experience was unlocked. Development toolchains were available and folks were writing installers for native apps.

The first iPhone app created outside of Apple.

This rapid progress was made possible thanks to the tools used to build the original iPhone. Apple relied on the same infrastructure as Mac OS. They chose a familiar environment to expedite their own development, but that same familiarity allowed those of us outside Cupertino to figure things out quickly.

Hello world.

For example, much of the software on the iPhone was created using Objective-C. Mac developers had long used a tool called class-dump to show the various pieces of an app and learn how things communicated with each other. Once we had access to the first iPhone’s apps and frameworks, this tool gave us great insight into what Apple had written.

The most important piece was a new thing called UIKit. It contained all the user interface components, like buttons and table views. Since they were similar to the ones we’d used on the Mac, it took little effort to build interfaces that responded to taps and scrolling.

Another important piece of the puzzle was the operating system: Unix. This choice by Apple meant that a lot of open source software was immediately available on our iPhones. We could use it to build our apps, then copy them over to the phone, and, most likely, view the content of LatestCrash.plist in /var/logs/CrashReporter.

I distinctly remember the first time I got a shell prompt on my iPhone and used uname to see the system information. I was home.

Early app development

I was not alone. Thousands of other developers were finding that the inside of this new device was just as magical as the outside. It shouldn’t come as a surprise to hear that there was an explosion of iPhone app development.

One of the pivotal moments for the burgeoning community came at an independent developer conference called C4[1]. It was held in August 2007, and many of the attendees had the new device and were discovering its capabilities. Most of us were also experienced Mac developers. We had just been to WWDC and heard Apple’s pitch for a “sweet solution”.

Amid this perfect storm, there was an “Iron Coder” competition for the “iPhone API”. The conference organizer, Jonathan “Wolf” Rentzsch, asked us to “be creative”. We were.

My own submission was a web app that implemented a graphing calculator in JavaScript. It epitomized what we all disliked about Apple’s proposal a few months earlier: a clunky user interface that ran slowly. Not the sandwich most of us were hoping for…

On the other hand, the native apps blew us away. The winner of the contest was a video conferencing app written by Glen and Ken Aspeslagh. They built their own front-facing camera hardware and wrote something akin to FaceTime three years before Apple. An amazing achievement considering the first iPhone didn’t even have a video camera.


But for me, the app that came in second place was the shining example of what was to come. First, it was a game and, well, that’s worked out pretty well on mobile. But more importantly, it showed how great design and programming could take something from the physical world, make it work seamlessly on a touch screen and significantly improve the overall experience.

Lucas Newman and Adam Betts created the Lights Off app a few days before C4. Afterwards, Lucas helped get me started with the Jailbreak tools, and at some point he gave me the source code so I could see how it worked. Luckily, I’m good at keeping backups and maintaining software: your iPhone X can still run that same code we all admired 10 years ago!

Lucas Newman presenting Lights Off at C4[1]. (Source: John Gruber)

If you’re a developer who uses Xcode, get the project that’s available on GitHub. The project’s Jailbreak folder contains everything Lucas sent me. The Xcode project adapts that code so it can be built and run – no changes were made unless necessary. It’s much easier to get running than the original, but please don’t complain about the resolution not being 1-to-1.

In the code you’ll see things like a root view controller that’s also an application delegate: remember that we were all learning how to write apps without any documentation. There’s also a complete lack of properties, storyboards, asset catalogs, and many other things we take for granted in our modern tools.

If you don’t have Xcode, you’re still in luck. Long-time “iPhone enthusiast” Steve Troughton-Smith sells an improved version on the App Store. I still love this game and play it frequently: Its induction into iMore’s Hall of Fame is well-deserved.

Now I was armed with tools and inspiration. What came next?

The Iconfactory’s first apps

The first version of Twitterrific on the iPhone. And pens. And slerp.

In June 2007, we had just released version 2.1 of our wildly popular Mac app for Twitter. It should have been pretty easy to move some Cocoa code from one platform to another, right?


Not really. But I was learning a lot and having a blast. The iPhone attracted coders of all kinds, including our own Sean Heber. In 2007, Sean was doing web development and didn’t know anything about Objective-C or programming for the Mac. But that didn’t stop him from poking around in the class-dump headers with the rest of us and writing his first app.

But he took it a step further with a goal to write an app for every day of November 2007 (inspired by his wife doing NaNoWriMo). He called it iApp-a-Day and it was a hit in the Jailbreak community. The attention eventually landed him a position at Tapulous, alongside the talented folks responsible for the iPhone’s first hit franchise: Tap Tap Revenge.

Over the course of the month, Sean showed that the iPhone could be whatever the developer wanted it to be. Sure, it could play games, but it could also keep track of your budget, play a tune, or help you hang a painting.

Screenshots from Sean Heber’s iApp-a-Day.

Both Sean and I have archives of the apps we produced during this period. The code is admittedly terrible, but for us it represents something much greater. Reading it brings back fond memories of the halcyon days when we were experimenting with the future.

There were a lot of surprises in that early version of UIKit. It took forever to find the XML parser because it was buried in the OfficeImport framework. And some important stuff was completely missing: there was no way to return a floating point value with Objective-C.

There were also strange engineering decisions. You could put arbitrary HTML into a text view, which worked fine with simple tags like <b>, but crashed with more complex ones. Views also used LKLayer for compositing, which was kinda like the new Core Animation in Mac OS Leopard, but not the same. Tables also introduced a new concept called “cell reuse” which allowed for fast scrolling, but it was complex and unwieldy. And it would have been awesome to have view controllers like the ones just released for AppKit.

But that didn’t stop us from experimenting and learning what we could do. And then something happened: we stopped.

A real SDK

Apple had worked its butt off to get the iPhone out the door. Those of us who were writing Jailbreak apps saw some warts in that first product, but they didn’t matter at all. Real artists ship. Only fools thought it sucked.

Twitterrific’s design at the App Store launch.

Everyone who’s shipped a product knows that the “Whew, we did it!” is quickly followed by a “What’s next?”

Maybe the answer to that question was influenced by all the Jailbreaking, or maybe the managers in Cupertino knew what they wanted before the launch. Either way, we were all thrilled when an official SDK was announced by Steve Jobs, a mere five months after the release of the phone itself.

The iPhone SDK was promised for February of 2008, and given the size of the task, no one was disappointed when it slipped by just a few days. The release was accompanied by an event at the Town Hall theater.

Ten years ago today was the first time we learned about the Simulator and other changes in Xcode, new and exciting frameworks like Core Location and OpenGL, and a brand new App Store that would get our products into the hands of customers. Jason Snell transcribed the event for Macworld. There’s also a video.

Our turn to be real artists

After recovering from all the great news, developers everywhere started thinking about shipping. We didn’t know exactly how long we would have, but we knew we had to hustle.

Winning an Apple Design Award. Thank you. (Source: Steve Weller)

In the end, we had about four months to get our apps ready. Thanks to what The Iconfactory learned during the Jailbreak era, we had a head start understanding design and development issues. But we still worked our butts off to build the first iPhone’s Twitter app.

Just before the launch of the App Store, Apple added new categories during its annual design awards ceremony. We were thrilled to win an ADA for our work on the iPhone.

How thrilled? The exclamation I used while downloading the new SDK was the same as getting to hold that silver cube.

After that, we were among the first apps to be featured in the App Store and ranked high in the early charts.

We knew we were a part of something big. Just not how big.

The journey continues

The second version of Twitterrific and some guy.

The Iconfactory’s first mobile app entered a store where there were hundreds of products. There are now over 2 million.

We now sell mobile apps for consumers and tools for the designers & developers who make them.

We now do design work for mobile apps at companies large, medium, and small.

We now develop mobile apps for a select group of clients. (Get in touch if you’d like to be one of them.)

A lot can happen in a decade.

But one thing hasn’t changed. Our entire team is still proud to be a part of this vibrant ecosystem and of the contributions we make to it. Here’s to another decade!