
Friday, January 21, 2022

1/21/2022 11:39:00 AM

Azure Update: Securing Azure Kubernetes networking with Calico


With a few lines of YAML, Calico will keep watch as you build application-controlled networking.

One of the intriguing aspects of moving to a top-down, application-centric way of working is redefining how we do networking. Much as the application model first abstracted away physical infrastructure with virtualization and is now using Kubernetes and similar orchestration tools to abstract away the underlying virtual machines, networking is moving away from general-purpose routed protocol stacks to software-driven networking that uses common protocols to implement application-specific network functions.

We can see how networking is evolving with Windows Server 2022's introduction of SMB over QUIC as an alternative to general-purpose VPNs for file sharing between on-premises Azure Stack systems and the Azure public cloud. Similarly, in Kubernetes, we're seeing technologies such as service mesh deliver an application-defined networking model that ships a network mesh with your distributed application as part of the application description rather than as a network that an application merely uses.

A new networking layer: application-defined networking

This application-driven networking is a logical extension of much of the software-defined networking model that underpins the public cloud. However, instead of requiring a deep understanding of networking and, more importantly, network hardware, it's a shift to a higher-level approach where a network is automatically deployed from the intents expressed in policies and rules. The shift away from both the virtual and the physical is essential when we're working with dynamically self-orchestrating applications that scale up and down on demand, with instances across multiple regions and geographies all part of the same application.

It's still early days for application-driven networking, but we're seeing tools appear in Azure as part of its Kubernetes implementation. One option is the Open Service Mesh, of course, but there's another set of tools that helps manage the network security of our Kubernetes applications: Network Policy. This helps manage connectivity between the various components of a Kubernetes application, handling traffic flow between pods.


Network policies in Azure Kubernetes Service
AKS (Azure Kubernetes Service) offers network policy support through two routes: its own native tool or the community-developed Calico. This second option is perhaps the most interesting, as it gives you a cross-cloud tool that can work not only with AKS, but also with your own on-premises Kubernetes, Red Hat's OpenShift, and many other Kubernetes implementations.

Calico is managed by Kubernetes security and observability company Tigera. It's an open source implementation of the Kubernetes network policy specification, handling connectivity between workloads and enforcing security policies on those connections, adding its own extensions to the base Kubernetes functions. It's designed to work using different data planes, from eBPF on Linux to Windows Host Networking. This approach makes it ideal for Azure, which offers Kubernetes support for both Linux and Windows containers.
Setting up network policy in AKS is important. By default, all pods can send data anywhere. Although this isn't inherently insecure, it does open up your cluster to the possibility of compromise. Pods containing back-end services are left open to the outside world, allowing anyone to access your services. Implementing a network policy allows you to ensure that those back-end services are only accessible by front-end systems, reducing risk by controlling traffic.


Whether you use the native service or Calico, AKS network policies are YAML documents that define the rules used to route traffic between pods. You can make those policies part of the overall deployment for your application, defining your network alongside your application description. This allows the network to scale with the application, adding or removing pods as AKS responds to changes in load (or, if you're using it with KEDA (Kubernetes-based Event-Driven Autoscaling), as your application responds to events).

Using Calico in Azure Kubernetes Service

Choosing a network policy tool must be done at cluster creation; you can't change the tool you're using once the cluster has been deployed. There are differences between the AKS native implementation and its Calico support. Both implement the Kubernetes specification, and both run on Linux AKS clusters, but only Calico has support for Windows containers. It's important to note that although Calico will work in AKS, there's no official Azure support for Calico beyond the existing community options.

Getting started with Calico in AKS is fairly simple. First, create an AKS cluster and add the Azure Container Networking plug-in to your cluster. This can host either AKS network policy or Calico. Next, set up your virtual network with any subnets you plan to use. Once you have this in place, all you need to do is use the Azure command line to create an AKS cluster, setting your network policy to "calico" rather than "azure." This enables Calico support on both Linux and Windows node pools. If you're using Windows, make sure to register Calico support using the EnableAKSWindowsCalico feature flag from the Azure CLI.
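The steps above can be sketched with the Azure CLI. This is a minimal example, not the article's own commands; the resource group, cluster name, and region are placeholders:

```shell
# Create a resource group (name and region are placeholders)
az group create --name myResourceGroup --location eastus

# Create an AKS cluster using the Azure CNI plug-in with Calico as
# the network policy engine; the policy choice cannot be changed
# after the cluster is created
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-policy calico

# For Windows node pools, register the feature flag first
az feature register \
  --namespace Microsoft.ContainerService \
  --name EnableAKSWindowsCalico
```

These commands require an authenticated Azure subscription, so treat them as a deployment sketch rather than something to paste verbatim.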

The Calico team recommends installing the calicoctl management tool in your cluster. There are several different options for installation: running binaries under Windows or Linux, or adding a Kubernetes pod to your cluster. This last option is probably best for working with AKS, as you can then mix and match Windows and Linux pods in your cluster and manage both from the same Kubernetes environment.

Building and deploying Calico network policies

You'll create Calico network policies using YAML, setting policies for pods with specific roles. These roles are applied as pod labels when creating the pod, and your rules will need a selector to attach your policy to the pods that match your app and role labels. Once you've created a policy, use kubectl to apply it to your cluster.

Rules are easy enough to define. You can set ingress policies for specific pods to, say, only accept traffic from another set of pods that match another selector pattern. This way you can ensure that your application back end only receives traffic from your front end, and that your data service only responds when addressed by your back end. The resulting simple set of ingress rules ensures isolation between application tiers as part of your application description. Other options allow you to define rules for namespaces as well as roles, ensuring separation between production and test pods.
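A policy of this shape can be expressed with the standard Kubernetes NetworkPolicy resource. The labels below (`role=backend`, `role=frontend`) are illustrative, not taken from the article:

```shell
# A hypothetical two-tier policy: pods labeled role=backend accept
# ingress only from pods labeled role=frontend; all other ingress
# to those pods is denied
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
EOF
```

Because this uses the upstream NetworkPolicy API, the same document works whether the cluster's policy engine is Calico or the AKS native implementation.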

Calico gives you fine-grained control over your application network policy. You can manage ports, specific application endpoints, protocols, and even IP versions. Your policies can be applied to a specific namespace or globally across your Kubernetes instance. Rules are set for ingress and egress, allowing you to control the flow of traffic in and out of your pods, with policies denying all traffic apart from what's specifically allowed. With Calico, there's enough flexibility to quickly build complex network security models with a handful of simple YAML files. Just create the YAML you need and use calicoctl to apply your rules.
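For the Calico-specific extensions, policies use Calico's own API and are applied with calicoctl. The sketch below shows a cluster-wide (non-namespaced) policy; the labels and port are illustrative assumptions:

```shell
# A hypothetical GlobalNetworkPolicy: pods labeled role=backend
# only accept TCP 443 from pods labeled role=frontend, anywhere
# in the cluster
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-https-from-frontend
spec:
  selector: role == 'backend'
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: role == 'frontend'
      destination:
        ports:
          - 443
EOF
```

Note the Calico selector syntax (`role == 'backend'`) and the explicit `action` field, neither of which exists in the upstream NetworkPolicy API; this is where Calico's extensions go beyond the base specification.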


Application-driven networking is an important concept that allows application development teams to control how their code interacts with the underlying network fabric. Like storage and, thanks to tools like Kubernetes, compute, the ability to treat networking as a fabric that can be simply controlled at a connection level is important. Networking teams no longer have to configure application networks; all they need to do is provide VNets and then leave the application policies up to the application.

If we're to build flexible, modern applications, we need to take advantage of tools such as Calico. It may require a change in how we think about networks, but it's an essential one to support modern application architectures.




Tuesday, November 20, 2018

11/20/2018 04:34:00 PM

Microsoft Azure, Office 365 Users Hit by Multi-factor Authentication Issue

Unable to sign into your Microsoft Office 365, Azure Active Directory and other services? A global Multi-Factor Authentication (MFA) issue may be the reason.


A number of Microsoft Azure and Office 365 users have been unable to get into their accounts for most of the day on November 19. The problem: A multi-factor authentication issue which hit users worldwide and left them unable to sign into their services.

The Office 365 status page noted that affected users may be unable to sign in using multi-factor authentication (two-factor authentication) and may also be unable to do self-service password resets.

"A subset of users are no longer receiving prompts on their mobile devices (SMS, call or push) and (we) are investigating diagnostic logs to understand why," the status dashboard noted.

Azure and Office 365 MFA services were affected starting at 4:39 a.m. UTC on November 19 (11:39 p.m. ET on November 18), according to the Office status page.

The Azure status page noted that those affected may have had problems signing into Azure resources like Azure Active Directory when MFA was required.

Azure engineers reported around noon ET today that they had deployed a hotfix, but that it took time to take effect across impacted regions, in particular Europe and Asia-Pacific. Engineers reported seeing a reduction in authentication errors as a result of the hotfix. The hotfix also tipped off engineers that some subset of customers was not receiving SMS, call, or push prompts for MFA.

"Engineers are continuing to explore additional workstreams and potential impact on customers in other Azure regions to fully mitigate this issue," the Azure status page noted.

Beyond those status reports, Microsoft hasn't shared information as to what caused the MFA services problem today.

Update (4:30 pm ET): I'm hearing from more users that the MFA issue seems to be mostly over. The Azure status page now says all services are running normally. The Office 365 status page says:

"We've observed continued success with recent MFA requests and are taking additional actions in the environment in an effort to prevent new instances of this problem. Our investigation into the root cause of this problem is ongoing and maintained as our highest priority."



Wednesday, November 7, 2018

11/07/2018 07:25:00 PM

Walmart is bringing 'thousands' of internal business apps to Microsoft's Azure

Not all Microsoft 'partners' are created equal. These days, as in the case of Microsoft's latest 'partnership' announcement with Walmart, they are actually customers.


Microsoft and Walmart announced a "strategic partnership" back in July. At that time, Walmart committed to using Azure, Microsoft 365 and Microsoft AI and Internet of Things (IoT) tools and technologies to modernize its retail operations.

On November 5, Microsoft and Walmart officials said they were adding "an extension" to their original five-year deal. That extension involves Walmart expanding its existing Innovation Hub in Austin, Texas. The so-called Walmart "cloud factory," slated to open in 2019, will be staffed by 30 technologists, including engineers from both Walmart and Microsoft, according to a Microsoft blog post about the deal.

As part of this expansion, Walmart is planning to move thousands of its internal apps to Azure, plus build some new cloud-native applications. Walmart officials say they are going to use Microsoft's Cognitive Services, machine learning and chatbot technologies.

Microsoft's Walmart win is worth noting for a few reasons.

First, it's an example of the type of customer that Microsoft is targeting by positioning itself as an alternative to Amazon -- which is both a cloud vendor and a competitor to other brick-and-mortar retailers.

The Walmart deal also gives me an opportunity to talk about Microsoft's redefinition of the word "partner."

Until a couple of years ago, when Microsoft officials talked about partners, they meant either reseller/integrator partners or they meant OEM/ISV partners. More recently, Microsoft uses the partner/partnership terms a lot more loosely. Some Microsoft customers are now "partners," too.

Microsoft's justification in calling customers partners seems to be that many times -- as in Walmart's case -- Microsoft engineers and managers end up working side-by-side with the company's customers' engineering and management teams. This actually isn't new; Microsoft has been embedding its own engineers and support people inside key customers' shops for years, maybe decades.

But Microsoft officials maintain that things are different this time.

When Microsoft reorganized its salesforce a year ago and cut multiple thousands of jobs in the process, it also subsequently hired "more than 3,000 developers into the salesforce," according to Judson Althoff, Microsoft Executive Vice President of its Worldwide Commercial Business. As a result, "we can actually code with our customers and help them really digitize everything they do within their business," Althoff said during the Citi Global Technology Conference in September.

As part of that reorg, Microsoft also built a "customer success organization," Althoff reminded attendees, which is basically a "non-billable consulting" organization, he explained.

"These are people that live with our customers beyond the deal to make sure that our cloud services actually get infused into business processes, that you don't have the cloud equivalent of shelfware in the software world. It's also been huge for us because the adoption and the actual utilization of the services have increased dramatically over the last year," he said.

(This is all part of Microsoft's big change in compensating salespeople for software/services sold vs. software/services that are actually consumed.)

Microsoft is going so far as to categorize some big customers officially as "digital partners," Althoff noted. At the Citi conference, he cited Boeing as an example, saying "we've actually taken their digital aviation assets, put them on our Azure platform, and we co-sell with them to other airlines around the world."

So for those trying to keep tabs at home, not all Microsoft "partners" are created equal. Some (many?) are actually customers, not traditional channel partners.



Tuesday, September 4, 2018

9/04/2018 07:35:00 PM

Microsoft Azure now supports NVIDIA GPU Cloud for AI, HPC workloads

Microsoft and NVIDIA are targeting data scientists, developers and researchers with preconfigured containers with GPU-accelerated software for running their AI and high-performance computing tasks.


Microsoft has added a new level of support for NVIDIA GPU projects to Azure which may benefit those running deep-learning and other high-performance computing (HPC) workloads. The pair are touting the availability of pre-configured containers with GPU-accelerated software as helping data scientists, developers and researchers circumvent a number of integration and testing steps before running their HPC tasks.

Customers have a choice of 35 GPU-accelerated containers for deep learning software, HPC applications, HPC visualization tools and more, which can run on the following Microsoft Azure instance types with NVIDIA GPUs:

  1. NCv3 (1, 2 or 4 NVIDIA Tesla V100 GPUs)
  2. NCv2 (1, 2 or 4 NVIDIA Tesla P100 GPUs)
  3. ND (1, 2 or 4 NVIDIA Tesla P40 GPUs)

As NVIDIA noted, these same NVIDIA GPU Cloud (NGC) containers work across Azure instance types, even with different types or quantities of GPUs. There's a pre-configured Azure virtual machine image with everything needed to run NGC containers in the Microsoft Azure Marketplace.
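Running one of these NGC containers on a GPU-enabled Azure VM follows the standard NGC workflow. This is a sketch; the image tag is illustrative (browse the NGC registry for current tags), and it assumes the nvidia-docker2 runtime that NGC required at the time:

```shell
# Authenticate to the NGC registry; the username is literally
# '$oauthtoken' and the password is your NGC API key
docker login nvcr.io

# Pull a GPU-accelerated deep learning container (tag is an example)
docker pull nvcr.io/nvidia/tensorflow:18.08-py3

# Run it with the NVIDIA container runtime so the VM's GPUs are
# visible inside the container
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:18.08-py3
```

The point NVIDIA makes above is that this same invocation works unchanged across NCv2, NCv3, and ND instance types, regardless of the GPU model or count.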

Microsoft also made generally available today "Azure CycleCloud," which officials described as "a tool for creating, managing, operating and optimizing HPC clusters of any scale in Azure."



Monday, July 16, 2018

7/16/2018 09:16:00 PM

Snowflake's cloud data warehouse comes to Microsoft Azure

In a homecoming of sorts, cloud data warehouse pure-play Snowflake's product is no longer an AWS exclusive

Cloud data warehouse player Snowflake Computing is today announcing the availability of its platform on Microsoft's Azure cloud. Heretofore available only on Amazon Web Services (AWS), the platform is now the first major multi-cloud data warehouse.

That Snowflake started off on AWS is somewhat ironic, given the company's Microsoft DNA. For example, Snowflake CEO Bob Muglia once led the Server and Tools Business (the precursor to today's Cloud and Enterprise division) at Microsoft. And Soma Somasegar, Managing Director at Madrona Venture Group, which participated in Snowflake's $100M Series D funding round last year, led Microsoft's developer division for many years.

But this isn't about nostalgia or sentimentality. Snowflake's VP of Product, Christian Kleinerman, told me that the demand the company had for its product to run on Azure was "absolutely phenomenal." The fact that many companies have multi-cloud strategies and that several others are natural competitors to Amazon (and therefore AWS-averse), underlies that demand.

On the Amazon side, Snowflake competes with Amazon's own data warehouse service, Redshift. On the Microsoft side, Snowflake will essentially compete with Microsoft's Azure SQL Data Warehouse (SQL DW).

But Snowflake will likely drive a lot of utilization of Azure compute and storage resources, which in fact ought to make for an enthusiastic partner in Microsoft. And that checks out: Kleinerman, who himself is a Microsoft alumnus, having held a leadership position on the product that was SQL DW's on-premises precursor, told me that the partnership has been excellent and that the Snowflake team got full and immediate support from Redmond whenever it hit a speed bump in its integration work.

Kleinerman also told me that Snowflake's Snowpipe service, which can ingest streaming data that lands in Amazon's Simple Storage Service (S3), will do likewise for data that lands in Azure Blob storage. The company will also charge for storage the same price that customers would pay were they to provision Azure storage on their own. This mirrors the company's policy of providing pass-through pricing on S3 when it runs on AWS.

This partnership, which was heavily rumored and lightly acknowledged for months, is here, with Snowflake now available on Azure for preview. That makes for yet one more option for customers implementing analytics in the cloud. And while in the immediate term, it may add a bit of confusion, in the end it also adds choice, and robust competition between vendors, all of which is good for the customer.




Sunday, July 1, 2018

7/01/2018 07:36:00 PM

Microsoft's Azure IoT Edge, now generally available, is key to Redmond's IoT strategy

Microsoft's service for bringing computing and artificial intelligence processing to IoT devices, Azure IoT Edge, is rolling out globally.


Microsoft's Azure IoT Edge service is generally available (GA) globally, as of today, June 27.

Azure IoT Edge is a service which allows users to deploy and run Azure services, AI and custom logic on IoT devices. Users can containerize Azure Cognitive Services, Machine Learning, Stream Analytics and Functions so they can run them on all kinds of devices, ranging from Raspberry Pis to industrial equipment. This is what Microsoft means when it talks about processing data "at the edge."

For a couple of years, Microsoft has been refocusing its mission around tools and services for the "intelligent cloud and intelligent edge." Microsoft defines the "edge" broadly as where users interact with the cloud. Edge devices can be anything from virtual-reality/mixed-reality headsets to drones, to on-premises PCs and servers.

Simultaneous with the GA announcement, Microsoft is expanding its Azure Certified for IoT program, adding new categories like device management and security to its catalog. Pre-built Edge modules also are available via the Azure Marketplace.

Microsoft released a preview of Azure IoT Edge last year at its Connect(); conference in November. At its Build 2018 developers conference earlier this year, Microsoft announced it was open-sourcing the IoT Edge runtime, making it available on GitHub.

Using Azure IoT Edge, users can scale deployments across millions of devices using Microsoft's Automatic Device Management service, officials said. Software development kits are available for C, C#, Node.js, Python, and Java. VSTS support is built-in and development tooling is available for the service in Microsoft's lightweight VS Code tool.

"People are choosing Azure as a cloud because of it (IoT Edge)," said Arjmand Samuel, Principal Program Manager of Azure IoT.

There are three components required for Azure IoT Edge deployment: The Azure IoT Edge Runtime, Azure IoT Hub, and edge modules. The Azure IoT Edge runtime is free, but users will need an Azure IoT Hub instance for edge device management and deployment if they are not using one for their IoT solution already, officials said. Go here for more Azure IoT Edge pricing information.
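Wiring those three components together can be sketched with the Azure CLI. The resource names are placeholders, and the commands assume the CLI's IoT extension as it was named at the time of this announcement:

```shell
# Add the IoT extension to the Azure CLI (later renamed 'azure-iot')
az extension add --name azure-cli-iot-ext

# Create the IoT Hub that will manage edge devices and deployments
az iot hub create --resource-group myRG --name myIoTHub --sku S1

# Register a device identity flagged as an IoT Edge device
az iot hub device-identity create \
  --hub-name myIoTHub --device-id myEdgeDevice --edge-enabled

# Push a deployment manifest (the list of edge modules to run,
# e.g. containerized Stream Analytics or Functions) to the device
az iot edge set-modules --hub-name myIoTHub \
  --device-id myEdgeDevice --content ./deployment.json
```

The `deployment.json` manifest is where the containerized Azure services described above are declared; the IoT Edge runtime on the device pulls and runs whatever modules the manifest lists.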



Thursday, March 8, 2018

3/08/2018 09:38:00 PM

Axon moves 20 PB of data, Evidence.com to Microsoft Azure

Axon reported strong fourth quarter results and detailed a December migration to Microsoft Azure as well as other cloud-centric implementations.



Axon, formerly known as Taser, has moved 20 petabytes of data to Microsoft Azure in what the company billed as one of the largest cloud migrations.

The company, which makes law enforcement devices, a body camera system and evidence collection cloud platform called Evidence.com, disclosed the move to Microsoft Azure with strong fourth quarter results.

As previously reported, Axon last year launched a digital transformation effort and a pivot to software-as-a-service and sensors. Axon had outlined a decision to move to Azure a little more than two years ago.

In early December, Axon completed the U.S. migration of Evidence.com to Azure. The company said the move will enable it to win larger deals with customers and be more nimble. Axon said that Azure helped it "win several major city police agencies in the U.S. and one large international customer."


Axon had $700,000 in duplicate storage and migration costs in the fourth quarter, under the $1 million expected. In the third quarter, those migration and storage costs were $1.4 million.

And Axon's cloud work isn't done. CFO Jawad Ahsan said the company is moving to a system to automate revenue recognition, deploying a new HR system and migrating to cloud enterprise resource planning.


For Axon, the IT system revamp was necessary so it can swap out its business model to recurring revenue and a halo of devices, software, and technology for law enforcement. Axon also recently launched an artificial intelligence group and outlined a series of product enhancements for evidence submission.

"Axon is at a new juncture. We have a platform of innovative and interconnected businesses that will allow us to continue creating a dominant new market," said Axon CEO Rick Smith, who is moving to a 100 percent performance-based compensation plan.


Luke Larson, Axon's president, noted that Evidence.com can now search for videos from body cameras faster as well as absorb CCTV footage and other forms of video and audio evidence. That level of unstructured data demanded storage that could scale.

Axon's fourth-quarter results are showing traction. The company reported a fourth-quarter net loss of $2.1 million, or 4 cents a share, on revenue of $94.7 million, up 15 percent from a year ago. Non-GAAP earnings were 13 cents a share. Excluding stock-based compensation, Axon reported earnings of 18 cents a share.


For 2017, Axon reported revenue of $344 million, up 28 percent from a year ago, and net income of $5.21 million.

As for the outlook, Axon said it will deliver revenue growth of 16 percent to 18 percent.

For now, the bulk of Axon's fourth-quarter revenue comes from Taser weapons, but the software and sensors unit is accounting for more of sales. Taser weapons delivered $64.4 million in revenue for the fourth quarter with software and sensors at $30.2 million.




Sunday, August 20, 2017

8/20/2017 10:51:00 PM

CoreOS stretches out Kubernetes to Microsoft Azure

CoreOS's Kubernetes distro, Tectonic 1.7, delivers on hybrid cloud by extending container DevOps capabilities across clouds and data centers, now including Microsoft Azure.





CoreOS, the maker of Tectonic, its enterprise-ready Kubernetes container DevOps platform, has released Tectonic 1.7. More than that, Tectonic is now available on the Microsoft Azure cloud. This enables enterprises to use a single, consistent Kubernetes platform in hybrid cloud environments.

Tectonic contains Kubernetes 1.7's newest features. CoreOS always uses the most recent version of Kubernetes. As one of Kubernetes' leading developers, CoreOS is in an excellent position to stay in sync with the open-source project. Tectonic's new features include:

One-click upgrades of pure upstream Kubernetes: With the ability to update Kubernetes from 1.6.7 to 1.7.1 with no downtime, Tectonic is the first enterprise-ready platform enabling automated operations.

Monitoring alerts: Tectonic now enables pre-configured alerts through the open-source Prometheus project. This lets users easily monitor their Kubernetes clusters by configuring their preferred notification channels. This release also introduces unique alerts around rolling updates for Deployments and DaemonSets. These tools allow application owners to gain visibility into deployment progress and scale-out.

Available on Azure: Tectonic enables users to securely and easily deploy Kubernetes workloads on Azure.

Hybrid readiness: Tectonic provides Kubernetes across multi-cloud platforms, with availability on Amazon Web Services (AWS), Azure, and bare metal.

Network policy support: With network policy now supported in alpha and powered by Project Calico, Tectonic enables better security and control of inbound traffic to your pods.

CoreOS claims to be the first vendor to give enterprises effortless automatic Kubernetes software upgrades. Tectonic users can atomically update Kubernetes versions in a single click. Instead of spending hours updating Kubernetes manually, Tectonic's approach enables companies to save time and concentrate on revenue-generating projects.

"This major release of Tectonic and its stable release on Microsoft Azure is a critical step to deliver on the promise of multi-cloud, making infrastructure and operations more efficient and scalable," said Rob Szumski, product manager, Tectonic, at CoreOS in a statement. "Tectonic on Azure saves you time and money by building your Kubernetes infrastructure correctly from the very beginning and speeding up deployment cycles. With the ability to do hybrid cloud deployments, infrastructure leaders have the freedom and flexibility of a platform that does not lock users into cloud compute and cloud services."

"We want to make Microsoft Azure the most open and flexible cloud for enterprises and ISVs to build and manage the applications their customers require," added Gabriel Monroy, Microsoft's lead product manager of Azure containers. "Tectonic on Azure is an exciting advancement, enabling customers to use CoreOS' enterprise-ready container management platform to easily manage and scale workloads to build and manage these applications on Azure."

Tectonic's single multi-cloud platform makes it simple to run, manage, scale, and share resources across an organization's hybrid cloud to support multiple application workloads. According to RightScale's 2017 State of the Cloud report, 85 percent of enterprises have a multi-cloud strategy. CoreOS is meeting this enterprise need with Tectonic, as enterprises today are already using Kubernetes in their hybrid strategies.

Sound intriguing? Tectonic is available on AWS, Azure, and bare metal environments. You can get a feel for it by using it for free on up to 10 nodes.