It’s time again for the Microsoft Build 2018 day 1 keynote, here in somewhat sunny Seattle. I blogged about some thoughts and predictions on what to expect from the event last week, and now it’s time to dive deeper into the announcements, the releases and what those might mean for developers and IT pros. I’ll follow the same format I did last year: I’ll blog live while the keynote happens and gather all the interesting resources at the bottom of this blog post. The same goes for the day 2 keynote on Tuesday.
We start with a small welcome from Charlotte Yarkoni, CVP for Growth + Ecosystems at Microsoft. She runs through what we’re about to hear during this keynote. There’s a promise of more customer presentations than ever before, which I think is generally a great thing – as long as they align well with the announcements we are about to get during the week.
Azure for Students gives students free access to the cloud. Microsoft for Startups was also mentioned; it essentially replaced the BizSpark program, and provides funding through VCs to startups working in the Microsoft space.
Once again we’re being reminded there’s never been a better time to work in technology. I tend to agree, but then again, I’ve felt like this every year since 1991 when I started working full time in IT.
Satya Nadella next on stage, as expected. He seems relaxed and I have to admit, I do like his style. It seems so effortless, laid back but still very professional.
Last year Satya walked us through the Intelligent Edge, and we return to this topic briefly through opportunities and responsibilities. The world is becoming a computer, with compute embedded in everything – every person, every thing and every place. We move on to privacy, and naturally GDPR gets a mention. “Customers are in control”, and “privacy is preserved.”
Ethical AI next. Essentially, what computers should do, instead of what they can do. Satya also ties the need for privacy into AI. As is customary, Microsoft is again looking a bit further into the future, while many companies are still figuring out how to move to the cloud, or what AI actually means for their businesses. Build 2018 is about creating the opportunities – with Microsoft Azure and Microsoft 365. It’s interesting that Office 365 doesn’t get a mention, as the marketing message shifted to Microsoft 365 sometime this year.
We start with Azure, including Azure Stack, Azure IoT Edge and Azure Sphere. Azure Stack has been available since last year; it’s essentially “cloud in a box”, with hybrid capabilities to reach out to the public Azure cloud services. Azure IoT Edge, also announced previously, is the solution for connecting Linux- and Windows-based IoT devices to Azure with good integration and little friction.
First announcement – Azure IoT Edge Runtime is being open sourced! There’s also a new partnership with Qualcomm, in the form of an AI Developer Kit and an IoT camera for industrial use. And a partnership with DJI, the drone maker, providing a Windows SDK and a commercial drone solution. This seems to target not the cheap DJI Phantom drones but the more commercial models DJI offers.
Next up, Azure Sphere, announced earlier this year at the RSA Conference. Sam George, Director for Azure IoT, on stage to demo the drones! Sam is showing a customer case combining AI and IoT using Azure IoT Edge. The solution runs locally in the camera, so data doesn’t need to be sent to the cloud first for processing. The partnership with DJI makes AI available for drones, such as using machine learning to recognize content from a live camera feed.
They demo this using a DJI Mavic Air on stage. Very impressive – the drone detects an anomaly in the live feed using some logic from the laptop it’s connected to wirelessly.
Satya returns on stage, and we move to AI. It’s available for every developer through Azure – “that’s the next shift.” Microsoft has 35+ Cognitive Services in Azure, the most of any cloud platform. A few announcements around this are posted here. Essentially, this includes a unified Speech service, new vision services and the Bot Builder SDK v4. Bot Framework gets 100+ new features as part of the SDK refresh. What’s also interesting is machine learning support for .NET, with ML.NET (a very early preview). With it, developers can build and train custom ML models in any .NET project.
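Since ML.NET itself was only a very early preview at this point, here’s the general shape of calling one of the Cognitive Services over plain REST instead – a minimal Python sketch where the endpoint, key and helper function are all placeholders of my own, not an official SDK:

```python
import json

# Placeholder endpoint and key for illustration only -- substitute your own.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
KEY = "<your-subscription-key>"

def build_analyze_request(image_url, features=("Description", "Tags")):
    """Assemble the headers, query parameters and JSON body for an
    image-analysis call; actually sending it (e.g. with requests) is
    left out of this sketch."""
    headers = {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    }
    params = {"visualFeatures": ",".join(features)}
    body = json.dumps({"url": image_url})
    return headers, params, body

headers, params, body = build_analyze_request("https://example.com/photo.jpg")
print(params["visualFeatures"])  # Description,Tags
```

The nice part of the Cognitive Services model is exactly this: a key, an endpoint and a JSON payload, regardless of which language or framework you call it from.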
Project Kinect for Azure, from Alex Kipman’s team, is also announced. It’s an AI-enabled sensor device for the intelligent edge. See the announcement here.
QnA Maker is also out of preview.
Many more details on the AI announcements are also in this blog post from Microsoft.
Project Brainwave announced. Cool name. It provides deep neural network processing on dedicated hardware, using Intel FPGAs.
Moving from AI to Microsoft 365. I think we’re getting new devices for the holiday season, but not today. The Harman Kardon Invoke is, I think, still the only real device employing Cortana today. Joe Belfiore is running the day 2 keynote, so I expect many new innovations and announcements tomorrow around Windows 10. This probably includes highlights from the Windows 10 April 2018 Update (1803, or Redstone 4) such as Sets and Timeline (although Sets, I recall, was pushed to RS5 for later this year).
Cortana + Amazon Alexa interoperability is being worked on. We get a demo of two offices, with an Amazon Alexa device in one and a laptop running Cortana in the other. The Alexa device actually runs both Alexa and Cortana, simply by saying “Alexa, open Cortana”. This might get confusing, but the multi-agent approach is very promising. It’s also something Microsoft probably needed, to piggyback on Alexa’s success and the ubiquitous Amazon devices found in many households today. You can sign up today for the early preview of the interop model.
Microsoft Graph next, and then HoloLens with Microsoft 365 – in essence, Mixed Reality. Remote Assist and Microsoft Layout are two new MR apps, available for public preview in late May. Remote Assist is a collaboration tool for solving problems faster. Microsoft Layout is a design tool for physical spaces. I feel we’re mixing MR (which is essentially VR with additional data and devices) and HoloLens (which is essentially see-through with holograms), and I think the new apps are being rolled out for HoloLens, but not for the current, tethered MR devices.
Meetings and Cortana are demoed using the Harman Kardon Invoke device. A new prototype device is shown, from quite a distance, that understands audio and video and does realtime translation. People entering a meeting room are recognized and greeted by the device. Realtime transcripts from the meeting are quite amazing! It also picks up plans and follow-ups, and presumably then keeps pestering you to actually work on the items you promised to deliver. This demo essentially tied together HoloLens, Surface Hub, Teams and the new device. Very nice, but I don’t dare look at the price tags for each of these devices just now.
Satya back on stage, moving to Microsoft Gaming and Dynamics 365. Gaming news is reserved for the E3 conference in June, so I’m not expecting much Xbox-related news today. If I were only working with on-premises solutions today, I would ask myself at this point, “is there anything left in on-prem for me anymore?”, and the answer would be “very little, time to move on.”
Closing the keynote now, Satya announces AI for Accessibility.
Moving on to part 2 of the keynote. Stretching. That did good, although it’s a bit cramped.
Scott Guthrie on stage. Red shirt, of course. Focus on Azure now: Productive + Hybrid + Intelligent + Trusted. Over 90% of Fortune 500 companies use the Microsoft Cloud – or more specifically, run some portion of their business in Azure.
Visual Studio + Azure, with many new announcements in this blog post. We first get a highlight of Visual Studio Live Share, where developers can code together in real time. Sounds like Netflix is out, and coding and chill is in! You can get the extension for Visual Studio and Visual Studio Code here. This is pair programming in 2018. I also spotted Visual Studio IntelliCode, which provides more intelligent code suggestions using AI. You can sign up for a future preview here. Also see the FAQ here.
The future of Windows Desktop Development is being described in deep detail here. In essence, the future is .NET Core 3.
Microsoft is the largest single corporate contributor to open source on GitHub, and is announcing deeper integration between GitHub and Visual Studio App Center. This is the tool for iOS and Android app development, and continuous everything. App Center connects to GitHub, recognizes your repos and project details, and pulls the repo for building and testing. Looks very simple and quick.
Donovan Brown is up next to rub some DevOps on it. We start with a VSTS demo, and a new feature in Azure called DevOps Projects, which provides a nice wizard for setting project parameters such as the programming language and where to deploy the code. Gone are the days of FTPing code or right-click > Publish. Everything goes to VSTS, and the pre-defined views tie everything together.
Azure Kubernetes Service (AKS) gets a brief mention. Note that it used to be called Azure Container Service (ACS), but apparently someone resolved the naming issues and legalities. Also announced: Dev Spaces for AKS, which provides inner-loop development with containers – editing and debugging code in an instant, and debugging across microservices.
Scott Hanselman next on stage, doing a demo on the new serverless capabilities. He seems a bit off, not sure why – maybe not the usual high-energy presentation. It could also be that Donovan Brown’s demo was so fast-paced and impressive that Scott’s session feels flat by comparison. Some authentic comedy in there too, when the microphone battery apparently ran out.
Back to Scott Guthrie, and serverless. Azure Functions and Logic Apps, for starters. Add in Event Grid, and I think CloudEvents is next. Microsoft announced support for it last week; see the GitHub repo for details on the specification.
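To make the CloudEvents idea concrete, here’s a small Python sketch that wraps a payload in an envelope using the field names from the early 0.1 draft of the spec – the event type and source values are hypothetical examples of mine:

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloud_event(event_type, source, data):
    """Wrap payload data in a CloudEvents-style envelope
    (field names from the 0.1 draft of the specification)."""
    return {
        "cloudEventsVersion": "0.1",
        "eventType": event_type,
        "eventID": str(uuid.uuid4()),
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "contentType": "application/json",
        "data": data,
    }

event = make_cloud_event(
    "com.example.order.created",  # hypothetical event type
    "/orders",                    # hypothetical source
    {"orderId": 42},
)
print(json.dumps(event, indent=2))
```

The point of the spec is that producers (like Event Grid) and consumers (like Functions or Logic Apps) agree on this envelope, so events can flow between clouds and frameworks without custom glue.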
IoT – 20 billion connected devices by 2020, is the prediction. Azure IoT Hub is the management layer for IoT devices – it supports basically any IoT device available today. We get a demo of building an app with IoT Hub from scratch. The usual “click a button and tweet something”, but still impressive to do in less than 5 minutes. Then back to Azure IoT Edge again – not sure why, as we already spent several minutes on it with Satya during the first part of the keynote.
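For a feel of what the device side of such a demo boils down to, here’s a tiny hedged sketch of building a device-to-cloud telemetry payload – the field names are illustrative choices of mine, and the actual send via the IoT Hub device SDK is left out:

```python
import json
import time

def make_telemetry_message(device_id, temperature, humidity):
    """Build a device-to-cloud telemetry payload of the kind a device
    would send to its IoT hub (field names are illustrative)."""
    return json.dumps({
        "deviceId": device_id,
        "temperature": temperature,
        "humidity": humidity,
        "timestamp": int(time.time()),
    })

msg = make_telemetry_message("build-demo-device", 21.5, 40)
print(msg)
```

IoT Hub then routes messages like this to downstream services – which is exactly where the “click a button and tweet something” wiring happens.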
We get a demo on using Azure ML models with IoT Edge devices, a Raspberry Pi essentially. Impressive in action!
Databases next: SQL DB, PostgreSQL, MySQL, Redis Cache and Cosmos DB. Cosmos DB was announced a year ago at Build, and builds upon DocumentDB. Cosmos DB gets multi-master write support.
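Multi-master writes imply conflict resolution, and a common default policy is last-writer-wins on a timestamp. A toy Python model of that policy – not the Cosmos DB SDK, and the `_ts` and `region` fields are my own stand-ins:

```python
def last_writer_wins(versions):
    """Pick the winning version of a document written concurrently in
    several regions, using the highest timestamp (ties broken by region
    name for determinism). Toy model, not the Cosmos DB SDK."""
    return max(versions, key=lambda v: (v["_ts"], v["region"]))

# The same document written in two regions at (almost) the same time:
versions = [
    {"id": "doc1", "value": "from-west", "region": "westus", "_ts": 1001},
    {"id": "doc1", "value": "from-east", "region": "eastus", "_ts": 1002},
]
winner = last_writer_wins(versions)
print(winner["value"])  # from-east
```

The interesting design question with multi-master is exactly this: what your app should do when two regions accept conflicting writes before they replicate.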
Last but not least, AI again. A quick lap around Cognitive Services (again). One new announcement here is the integration of Azure Search with Cognitive Services, to enrich content using AI models. Someone from the NBA delivered a talk at this point. It was a bit dull, and as I know nothing about the NBA, there was little here of interest to me. A basic 101 demo on using Cognitive Services, too.
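The enrichment idea is essentially a pipeline of AI “skills” applied to documents before they hit the search index. A hedged pure-Python sketch of that shape, where the key-phrase skill is a crude stand-in for the real Cognitive Services call:

```python
def extract_keyphrases(doc):
    """Stand-in for a Cognitive Services key-phrase skill: here we just
    pick the longer words out of the text."""
    doc["keyPhrases"] = sorted(
        {w.strip(".,").lower() for w in doc["text"].split() if len(w) > 6}
    )
    return doc

def enrich(docs, skills):
    """Run each document through every skill before indexing it."""
    for doc in docs:
        for skill in skills:
            doc = skill(doc)
    return docs

docs = [{"id": "1", "text": "Cognitive Services enrich searchable content."}]
enriched = enrich(docs, [extract_keyphrases])
print(enriched[0]["keyPhrases"])
```

Swap the toy skill for a real vision or text-analytics call and you get the announced scenario: unstructured content going in, AI-extracted fields coming out as searchable index data.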
Scott Guthrie back on stage to recap how to build your own AI models. Step 1 is preparing data with Azure Databricks. Step 2 is building and training the model with machine learning. Step 3, the final step, is deploying it for your apps – through AKS, Azure Batch, IoT Edge or something similar.
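Those three steps can be sketched end to end in toy, pure-Python form – no Azure services involved, just the shape of prepare → train → deploy:

```python
def prepare(raw):
    """Step 1 -- clean the data: drop records with missing values
    (in the real flow this is the Databricks job)."""
    return [r for r in raw if None not in r]

def train(data):
    """Step 2 -- 'train': a trivial threshold classifier that learns the
    mean feature value of the positive class."""
    positives = [x for x, label in data if label == 1]
    return sum(positives) / len(positives)

def deploy(threshold):
    """Step 3 -- deploy: wrap the model as a scoring function, the shape
    you would containerize behind AKS or push to an IoT Edge device."""
    return lambda x: 1 if x >= threshold else 0

raw = [(0.2, 0), (None, 1), (0.8, 1), (0.9, 1), (0.1, 0)]
model = train(prepare(raw))
score = deploy(model)
print(score(0.95), score(0.05))  # 1 0
```

The real versions of each step are of course far heavier, but the contract between them – cleaned data in, trained model out, model wrapped behind a scoring endpoint – is the same.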
Jeff Wile from Starbucks on stage. Not much to share here either – a very high-level look at how they use technology for scheduling, supply chain management, inventory and agronomy. They leverage cloud platforms “like Microsoft Azure.” It seems mandatory that IoT Hub be mentioned during every demo today.
Scott Guthrie is back. And then a demo from Paige using Azure Databricks to repeat all 3 steps ScottGu outlined earlier.
And that’s it for the keynote for day 1! See the announcements made during the keynote below.
Announcements during the keynote
- Azure IoT Edge Runtime is being open sourced, check out the repo here
- Windows SDK and commercial drone solutions with DJI – the beta SDK is available here
- Machine Learning for .NET (ML.NET), see the repo and docs here
- Project Kinect for Azure, a hardware device for IoT Edge, see further announcement here
- QnA Maker is out of preview and now generally available
- Project Brainwave, a dedicated hardware device for deep neural network processing. See details here
- Remote Assist and Microsoft Layout, two new MR apps available in late May
- AI for Accessibility announced
- Visual Studio Live Share announced, get the extension here
- Visual Studio Intellicode, that uses AI to provide intelligent suggestions to your code. Preview coming sometime in the future, sign up here
- Dev Spaces for AKS announced, no further details yet, general AKS announcements listed here
- Multi-master write support for Cosmos DB
- Azure Search support for Cognitive Services, for enriching content using AI
And that’s a wrap!
Mixed feelings about today’s keynote. On one hand, it was fluent and mostly what I was expecting. On the other hand, there wasn’t much there in the end – solid announcements, but nothing groundbreaking or earthshaking. Great things, but a bit scattered: some bits for AI, some bits for Microsoft 365, and at times it felt like an infomercial – “let me show you Teams next.”
Diversity and accessibility were noticeable, and Microsoft really feels committed to these themes.
The second part of the keynote with Scott Guthrie, I felt, was more captivating. A lot of talk around containers and IoT, as well as AI. Cosmos DB got a lot of visibility also, but it makes sense as that is really something every developer should know and master.
Tomorrow will be more about Microsoft 365, and technology, and a little less about educating us about Microsoft’s current offerings.
Stay tuned for my recap tomorrow!