Microsoft Announces Azure Stack HCI
Arpan Shah, General Manager of Microsoft Azure, recently announced in a blog post on the company’s website that Azure Stack HCI solutions are now available for customers who want to run virtualized applications on modern hyperconverged infrastructure (HCI) to lower costs and improve performance.
Azure Stack HCI allows users to deploy cloud instances locally and then migrate them to Azure or Azure Stack with no changes or special considerations. It will feature the same software-defined compute, storage, and networking software as Azure Stack; however, it will not replace Azure Stack nor be upgraded to Azure Stack.
This move is intended to help Microsoft compete with other CSPs’ private cloud offerings. Microsoft expects the first wave of Azure Stack HCI enterprise customers to come largely from existing Azure public cloud customers.
Bausch + Lomb Attributes Success to IBM Partnership
Ron Cummings Kralik, Principal Network Engineer and IoT Architect in the Surgical business unit at Bausch + Lomb, stated in an IBM blog post that the Canadian eye health products company felt a responsibility to identify a new and different approach to directly address the challenges that today’s ophthalmologists face, leading them to partner with IBM.
Using IBM Cloud’s Watson IoT services, Bausch + Lomb developed a new cloud-based solution, a suite of applications called eyeTELLIGENCE, which will be exclusively available on its Stellaris Elite vision enhancement system, the company’s next-generation surgical platform for cataract and retina surgery. eyeTELLIGENCE allows ophthalmologists to gather information about the system and make service requests or report technical issues, which are then routed via the IBM Cloud to the appropriate team at Bausch + Lomb.
Previously, when a doctor or their staff needed technical support, they had to contact a sales representative, or someone from Bausch + Lomb had to visit the doctor’s office in person to address the issue. IBM believes that eyeTELLIGENCE applications will help reinvent the way operating rooms function today.
Oracle Employees Face Layoffs
Oracle has begun a series of layoffs at several of its offices in California. The company plans to cut 352 people on May 21, according to a notice filed last week with the State of California. Staff reductions include 255 people in Redwood City and 97 in Santa Clara. Those laid off include people who worked in Oracle’s infrastructure cloud units, which were meant to help spur growth, according to news reports.
The software maker has recently struggled to increase revenue amid a transition to Internet-based programs and services. The company has also begun an unspecified number of layoffs at other facilities worldwide. One anonymous poster on thelayoff.com, a website hosting discussion boards for people affected by layoffs, said the total target is 10 percent of Oracle’s global headcount.
According to Oracle spokeswoman Deborah Hellinger, “As our cloud business grows, we will continually balance our resources and restructure our development group to help ensure we have the right people delivering the best cloud products to our customers around the world.” This isn’t the first instance of large-scale layoffs at Oracle: in 2017, the company cut nearly 1,000 people from its staff.
AWS Reveals General Availability for Inferentia
AWS has finally announced that general availability for Inferentia is expected by the end of 2019. The announcement was delivered at the cloud company’s AWS Santa Clara Summit. Inferentia is a machine learning chip designed to deliver high performance at relatively low costs. Inferentia will support the TensorFlow, Apache MXNet, and PyTorch deep learning frameworks, as well as models that use the ONNX format.
AWS announced Inferentia at its re:Invent conference last fall, making it close to a full year between the chip’s announcement and its actual deployment. Since the chips are designed and manufactured for in-house use only, that is a notably long gap.
AWS’s Inferentia deep learning inferencing processor comes on the heels of Google’s Tensor Processing Unit (TPU), which handles both training and inference workloads. Liftr Cloud Insights predicts other CSPs will design their own deep learning chips in the future.
That’s a wrap for this week’s Liftr Cloud Look Ahead. Has your business made major strides using cloud? We want to hear from you! Email us at email@example.com
See you next week!