Amazon Rekognition Video gives developers access to real-time video analysis


Amazon’s AWS division today expanded its line-up of pre-trained machine learning tools with the launch of Amazon Rekognition Video. The new service works with both batch uploads and real-time video streams, which gives Amazon a leg up on similar services from some of its competitors.

With Rekognition, AWS already offered a tool for analyzing static images and extracting data from them. With this video version, developers can now automatically get information about the objects in a video, the scenes they are set in and the activities happening in them. The service also includes support for person detection and recognition (and it’s pre-trained to recognize celebrities). It can also track people through a video and filter out potentially inappropriate content.
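For the batch side of the service, the developer workflow looks roughly like the following boto3 sketch (the bucket and file names here are placeholders, and a production setup would typically receive a completion notification via Amazon SNS rather than polling):

```python
import time
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Kick off an asynchronous label-detection job on a video already stored in S3.
job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "my-video-bucket", "Name": "clip.mp4"}}
)

# Poll until the job finishes; real applications would subscribe to an SNS topic instead.
while True:
    result = rekognition.get_label_detection(JobId=job["JobId"], SortBy="TIMESTAMP")
    if result["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(5)

# Each detected label comes back with the timestamp (in milliseconds) where it appears.
for item in result["Labels"]:
    print(item["Timestamp"], item["Label"]["Name"], item["Label"]["Confidence"])
```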

Recognizing objects and people in videos is quite a bit harder than doing the same thing in still images. Over the course of the last year or so, we’ve seen Google and others focus on video recognition, though, and that work is now paying off in the form of services like Rekognition Video, which can automatically generate metadata from video files.

Amazon introduces an AWS graph database service called Amazon Neptune


Amazon is in the middle of its AWS re:Invent keynote right now, and the company just announced a brand-new database service. Amazon Neptune has been specifically designed for relationship graphs. So if you’re thinking about building a social network feature, Neptune can help you.

The issue with traditional relational databases is that they’re not made for complex social graphs with complicated lists of friends and followers. By default, you have to run demanding database queries just to list the friends two users have in common, for instance.

So you can either throw more coal into the engine or optimize your database. Amazon Neptune has been optimized to handle billions of relationships and run queries within milliseconds. Neptune supports fast failover, point-in-time recovery and Multi-AZ deployments. And you can also encrypt data at rest.

Amazon is relying on existing technologies for interacting with Neptune. The database service supports both the Property Graph model and the W3C’s RDF model, along with their respective query languages, Apache TinkerPop Gremlin and SPARQL.
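That means the mutual-friends query mentioned above becomes a short graph traversal. Here’s a minimal sketch using the gremlinpython client against a Neptune endpoint; the endpoint URL, the “person” vertex label and the “friend” edge label are assumptions made for illustration:

```python
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to the Neptune cluster endpoint (placeholder hostname).
conn = DriverRemoteConnection(
    "wss://my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/gremlin", "g"
)
g = Graph().traversal().withRemote(conn)

# Friends that 'alice' and 'bob' have in common, assuming 'person' vertices
# connected by 'friend' edges.
mutual_friends = (
    g.V().has("person", "name", "alice")
     .both("friend")
     .where(__.both("friend").has("person", "name", "bob"))
     .values("name")
     .toList()
)
print(mutual_friends)
conn.close()
```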

Graph databases can be useful beyond social networks and dating apps. You can use a graph database for recommendation engines, logistics, genomic sequencing and more. Let’s see if big clients are going to switch to Amazon Neptune in the coming months.

AWS’s container service gets support for Kubernetes


AWS today announced its long-awaited support for the Kubernetes container orchestration system on top of its Elastic Container Service (ECS).

Kubernetes has, of course, become something of a de facto standard for container orchestration. It has the backing of Google (which incubated it), as well as Microsoft and virtually every other major cloud player. So AWS is relatively late to the party here, but it already hosts over 100,000 active container clusters on its service, and those users spin up millions of containers already.

AWS’s users are clearly interested in running containers and, indeed, many of them already ran Kubernetes on top of AWS, but without direct support from AWS. With this new service, AWS will manage the container orchestration system for its users. ECS for Kubernetes will support the latest versions of Kubernetes, and AWS will handle upgrades and all of the management of the service and its clusters.
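The service is only being announced in preview, so the exact API may still change, but in boto3 terms standing up a managed cluster looks roughly like this sketch (the IAM role, subnets and security group below are placeholders for your own resources):

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Ask AWS to stand up a managed Kubernetes control plane.
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-cccc3333"],
    },
)

# Once the cluster reaches the ACTIVE state, its API endpoint can be wired into kubectl.
cluster = eks.describe_cluster(name="demo-cluster")["cluster"]
print(cluster["status"], cluster.get("endpoint"))
```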

AWS CEO Andy Jassy noted that when the company launched ECS, there wasn’t really a container orchestration system like Kubernetes. He stressed that ECS is deeply integrated with the rest of the AWS platform and that it scaled “in a much broader way than other container services.”

AWS launches bare metal instances


AWS, Amazon’s cloud computing division, today announced its long-awaited bare metal instances.

With bare metal, you get direct access to the hardware and virtually 100 percent of its resources, without any major virtualization overhead. These instances also allow AWS users to run their own virtualization stacks, which gives them more control over their cloud servers.

AWS is launching these bare metal instances as part of its I3 instance family but expects to bring them to a wider range of instance families over time. These new instances are now going into public preview, but developers will have to sign up for the preview.

The bare metal instances will still be able to make use of all the usual EC2 services, AWS’s VP of global infrastructure Peter DeSantis noted in a keynote at the company’s re:Invent developer conference.
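In practice, that means a bare metal machine gets requested through the same EC2 API calls as any virtualized instance. A minimal boto3 sketch, assuming the i3.metal instance type that goes with the I3 family (the AMI ID and key pair name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a bare metal instance exactly like any other EC2 instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="i3.metal",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
)
print(response["Instances"][0]["InstanceId"])
```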

One area DeSantis talked quite a bit about in the context of this announcement was Amazon’s recent work on custom chips. A few years ago, AWS decided that it wanted to modernize the architecture of the EC2 platform. That meant moving the networking and storage stack to a new platform, which AWS dubbed the “Nitro Architecture.” To do this, AWS acquired Annapurna Labs and used that company’s expertise to build custom chips that allowed it to move much of what it was previously doing in software to a dedicated — and significantly faster — hardware platform. It also built its own hypervisor based on KVM.

Before building custom silicon, DeSantis argued, you have to “be really sure that you have a problem that merits this investment and a scale that merits it before you go down this path.” AWS clearly felt that building its own silicon, rather than relying on FPGAs, made sense for its use case, and nobody is going to argue with AWS’s scale, after all.

Veem opens up global payments platform to developers with new API


If you’ve ever tried to do business across borders, you know how painful it can be to send a wire transfer, wait for the payment to clear the bank and pay a set of fees along the way. Veem is a startup trying to simplify all of that for SMBs (small and medium-sized businesses) by providing a platform that eases the international transfer of funds between businesses. Today, it announced it was opening up that capability to developers in the form of an API.

Up until now, users could go to the Veem website, or they could use the direct integrations inside QuickBooks, Xero and NetSuite, three accounting platforms that are popular with SMBs.

To use Veem, the parties involved simply sign up on the Veem website to create an account. Instead of requiring a ton of information to make the transfer, all Veem needs is an email address. You enter the amount and the recipient’s email, and Veem handles the cross-border transfer, making its money on the exchange rate.

It’s worth noting that foreign exchange fees typically run around 4 percent with bank wire transfers. Depending on the transaction, Veem takes between 0.5 and 2 percent on the exchange, making it a much cheaper and easier option for small businesses.

When small businesses make these kinds of international transfers, there are also rules and regulations to deal with, which Veem can handle as well, including bills of sale or lading and other details the receiving bank may require.

This is really where opening the platform up beyond those three accounting packages could shine. It enables programmers to add international payments to any app by connecting to the Veem API.
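Veem hasn’t detailed the API’s exact shape here, so the endpoint, field names and auth scheme below are purely hypothetical, but conceptually an integration would look something like this:

```python
import requests

# Hypothetical sketch: the URL, field names and auth scheme are invented for
# illustration; consult Veem's actual API documentation for the real contract.
VEEM_API = "https://api.veem.example/v1/payments"   # placeholder URL
API_KEY = "your-api-key"                            # placeholder credential

payment = {
    "amount": 2500.00,
    "currency": "USD",
    "recipient_email": "supplier@example.com",  # an email is all Veem needs to route the payment
    "memo": "Invoice for October shipment",
}

resp = requests.post(
    VEEM_API,
    json=payment,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```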

This not only opens up the capability to developers, greatly simplifying something that’s actually quite complicated, it also gives Veem a way to appear inside more applications without having to build those integrations itself. And it provides a way to grow Veem usage without users having to go directly to the Veem site.

The company does not charge developers for connecting to Veem, making it an attractive option for programmers. Instead, it continues to make its money off of the exchange rate fees.

Veem launched in 2014 and has raised over $44 million.

PacketZoom lands $5M Series A investment to speed up mobile apps


PacketZoom, a startup that helps app developers speed up and optimize app delivery on mobile devices, announced a $5M Series A today.

The round was led by Baseline Ventures with participation from First Round Capital, Tandem Capital and Arafura Ventures. Today’s investment brings the total raised to over $9M, according to Crunchbase.

The company combines a content delivery network (CDN) that speeds up performance with an application performance management tool that identifies performance issues, all in a single package. Instead of making content faster from a web delivery standpoint, it focuses on speeding up app performance on the mobile device itself. That matters because research has shown that users have very little patience for apps that are buggy or have performance issues.

Unlike a web CDN, which cannot see what’s happening on end-user devices, PacketZoom has insight into activity on the device and inside the cellular network, company CEO Shlomi Gian explained. He says the company offers a free SDK to developers, which provides analytics about the app along with alerts for network-related performance issues. That same information helps PacketZoom understand the vagaries of the device/network connection and the kinds of problems that occur as the app interacts with the network.

The company earns its revenue from a second product, Mobile Expresslane. It promises to optimize app delivery, downloading content two to three times faster while reducing network errors. Developers pay per-volume pricing for this product.

One way they do this, Gian says, is by eliminating a lot of the errors related to network timeouts. The TCP network protocol was designed to slow down traffic when it detected congestion on the network, a perfectly logical approach when it was created, but less so in a mobile context, where packets are routinely lost for reasons that have nothing to do with congestion. “On wireless, you always lose packets, so we let the server know it didn’t receive a packet and it will do it on the next round,” Gian explained.
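That distinction is easy to see in a toy simulation (an illustration of the problem, not PacketZoom’s actual algorithm): a sender that shrinks its window every time a packet goes missing on a lossy wireless link delivers far less data than one that simply retransmits the lost packet and keeps going.

```python
import random

random.seed(42)
LOSS_RATE = 0.02    # 2 percent random wireless loss, unrelated to congestion
ROUNDS = 1000
MAX_WINDOW = 100

def simulate(halve_on_loss):
    """Return packets delivered over ROUNDS round trips under a given loss policy."""
    window, delivered = 10, 0
    for _ in range(ROUNDS):
        lost = sum(random.random() < LOSS_RATE for _ in range(window))
        delivered += window - lost
        if lost and halve_on_loss:
            window = max(1, window // 2)          # classic TCP-style congestion response
        else:
            window = min(MAX_WINDOW, window + 1)  # keep the window open, just retransmit
    return delivered

print("Backs off on every loss  :", simulate(True))
print("Retransmits, keeps window:", simulate(False))
```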

He says this helps eliminate a lot of the network errors related to timeouts, which can happen frequently on mobile as you switch from WiFi to cellular or move between mobile networks.

The company started in 2013, and it took several years to build the product, which was released in 2016. It raised its seed round in 2015 before securing the Series A it is announcing today.

In the 18 months since launching that first product, the company has signed up 68 paying customers, including Sephora, Glu Mobile and East Side Games.

The funding round comes on the heels of Cloudflare buying Neumob, one of PacketZoom’s competitors, earlier this month.

Overclock Labs bets on Kubernetes to help companies automate their cloud infrastructure


Overclock Labs wants to make it easier for developers to deploy and manage their applications across clouds. To do so, the company is building tools to automate distributed cloud infrastructure and, unsurprisingly, it is betting on containers — and specifically on the Kubernetes container orchestration system — to do this.

Today, Overclock Labs, which was founded two years ago, is coming out of stealth and announcing that it raised a $1.3 million seed round from a number of Silicon Valley angel investors and CrunchFund — the fund that shares a bit of its name and history with TechCrunch but is otherwise completely unaffiliated with the blog you are currently reading.

So far, the company has used this previously undisclosed funding to develop DISCO, which stands for Decentralized Infrastructure for Serverless Computing Operations. You may see the word “serverless” in there and think: so this is an event-driven service like AWS Lambda or Azure Functions? Nobody would blame you for thinking that, but as Overclock Labs co-founders Greg Osuri (CEO) and Greg Gopman (COO) told me, in their view the goal of “serverless” is being fully automated (the company’s third co-founder is Adam Bozanich). Lambda does this at the real-time level for event-driven applications, but DISCO, which is going to be open source, aims to support a wider range of applications.

The general idea here, Osuri told me, is to build a platform that lets you work with any cloud service provider and easily move between clouds. The developer experience, he said, should be somewhat like using Heroku, and the team is building both a graphical interface and a smart command-line tool for the service.

For now, the tool supports AWS, the Google Cloud Platform and bare-metal specialist Packet, but the team tells me it expects to support other clouds in the near future. And because DISCO is open source, others can easily build their own integrations, too.

To deploy applications with DISCO, developers can take one of two routes: if they are building according to the 12-factor app philosophy, DISCO can take their source code and deploy the app for them; otherwise, they can build their own containers and hand those over to DISCO for deployment. The system then handles the container registry and manages the containers for them.
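“12-factor” here refers to the Heroku-popularized methodology in which an app declares its dependencies explicitly and takes all of its configuration from environment variables, which is what allows a tool to build and deploy it straight from source. A minimal, DISCO-agnostic illustration of that configuration style:

```python
import os

# A 12-factor-style app reads its configuration from the environment rather than
# from files baked into the image, so the same source can be deployed unchanged
# to whatever cloud the orchestrator picks.
DATABASE_URL = os.environ["DATABASE_URL"]
PORT = int(os.environ.get("PORT", "8000"))

def main():
    print(f"Connecting to {DATABASE_URL}, listening on port {PORT}")

if __name__ == "__main__":
    main()
```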

The promise of DISCO is that it will make deploying applications as easy as using a service like Heroku, but for maybe a third of the cost. Osuri and Gopman previously built AngelHack together and have extensive experience in building open source tools and working inside open source ecosystems. It’s no surprise, then, that they plan to open source the DISCO tool and build premium services on top of it.

What exactly these premium services will look like remains to be seen, but the first order of business for the company is now to release DISCO within the next few months and then build an ecosystem around it.

That’s easier said than done, especially in an age when it can feel like a new high-profile open source project launches daily, but the founders are quite realistic about what the process will look like. They also admitted that they don’t expect to be the only players in this space — with Kubernetes, the basic building blocks for this automated infrastructure are available to anybody, after all (and with AWS re:Invent around the corner, I’m sure we’ll hear from other competitors in the next week or two). They do, however, think they are launching early enough to become one of the bigger players.
