
eBook Path to the Cloud

BUILD MORE THAN A NETWORK.™

THE PATH TO THE CLOUD AND THE ROLE OF THE NETWORK
Andy Ingram, VP, Data Center and Cloud, Juniper Networks

CONTENTS

INTRODUCTION
  Everything is Moving to the Cloud... or is it?
  Where are Enterprises on the Cloud Path?
THE ESSENTIAL COMPONENT
  The Network is the Essential Component
  The Evolution of Applications
  The Key Challenge in Moving to the Cloud
APPLICATIONS
  How Applications are Built Today
  Exploring the Implications on the Infrastructure
ORCHESTRATING AGILITY
  Orchestrating Agility in the Network
  Self-Provisioned Clouds and the Role of the Network
  The Network is the Foundation
  Guiding Principles
BUILDING A NETWORK
  Building a Modern Network
  Step 1: Simplify the Network
    How Things Used to Work
    Making Simple Even Simpler
    IP Fabrics
    Ethernet Fabrics
    Which Way is Best?
  Step 2: Securing the Network
    Automating Network Security
  Step 3: Automating the Network
    Junos Automation
    Automating Operate and Monitor and Build and Provision Tasks
    Automating Network Orchestration
    Containing Our Excitement
NETWORK VIRTUALIZATION
  Can we do for the Network what we did for the Server?
  Network Policy in Virtualization
  Software Defined Networking
SUMMARY
  In Summary

EVERYTHING IS MOVING TO THE CLOUD... OR IS IT?

In reality, many applications are far from cloud-ready, and some never will be. But almost every new application will be cloud-enabled. So how do you evolve your data center into a private cloud and drive hybrid IT by connecting to public clouds, yet still cater for the needs of your legacy software? To understand the path to the cloud in greater detail, we've created this eBook from an extended interview conducted by the team at DatacenterDynamics with Juniper Networks' Andy Ingram, the Global Vice President for Data Center and Cloud.
The webinar is also available online.

Andy Ingram has more than 30 years' experience in the high-tech industry bringing ground-breaking technology to market. He currently leads Juniper Networks' overall data center go-to-market strategy, running a worldwide organization focused on providing products for data centers and clouds. Andy joined Juniper Networks in 2008 from IGT, where he was the Senior VP of Network Systems. Prior to IGT, Andy held various senior management positions at Sun Microsystems, Hewlett Packard, Cray Research and Sequent Computers, and was involved in the marketing and sales of servers, storage, system software, security products and application software. Andy holds an MBA from the Anderson School at UCLA, and a bachelor's degree from the University of Colorado.

WHERE ARE ENTERPRISES ON THE CLOUD PATH?

In conjunction with DatacenterDynamics, we recently conducted a survey to understand how far along the path to the cloud enterprise organizations really are. While the survey identified many enterprises that were either there or thereabouts, it also revealed the proportion of enterprises that still have a long way to go. So, what are the barriers? It's the same old suspects – data security, data privacy and compliance. In this eBook, we tackle these issues to help you map out a clear route to achieving successful cloud integration within your enterprise.
WE DISCUSS:
• Why many applications don't necessarily move easily into the cloud
• The difference between older "mode 1" applications that lack flexibility and modern "mode 2" applications built specifically for the cloud
• How to achieve a coherent network environment that supports multiple generations of applications
• The importance of topology in the cloud-ready data center network and the choices available
• How to use open architecture to drive automation by providing software control of the network and integrating it into the virtualization capabilities of the data center.

[Survey highlights: 36% of enterprises haven't yet started on a path to the cloud; the largest share of respondents, 39%, are just getting started; 30% are only halfway there. So, lots of work to be done, for sure.]

THE NETWORK IS THE ESSENTIAL COMPONENT

Venturing along a path to the cloud, whether that's a public cloud or a private cloud, requires an essential component to deliver that service – the network. A key challenge today is that the technology in the data center network was almost static for about two decades. But it's now changing at a pace so fast that it's hard for customers to absorb. Every customer we talk to is trying to achieve both lower cost and greater agility. The challenge is that when you look inside the network infrastructure, there's been a significant evolution in deployed applications. They are being built in a new way that's creating new challenges for the infrastructure.

THE EVOLUTION OF APPLICATIONS

Once upon a time, applications were clean silos. We used to put a PC in front of the mainframe, call the application client-server, and consider it a big step forward. But then the internet came along, and instead of talking to a few hundred users, we could be talking to thousands or millions.
This meant changing the nature of the application in order to use browsers rather than an app on the PC.

[Figure: application silos (ERP, mail, EDI) serving finance, employees and customers – 95% of data traffic flowed north-south between clients and servers.]

So, monolithic mainframe apps became client-server apps, which evolved into multiple tiers because it made them easier to scale and change. The challenge came when we wanted one application to connect to other applications for data or capabilities. To get around this problem, someone wrote a SQL hack to pull information out of an application, and there began the ability to build connections between applications. So suddenly, we find ourselves in an application environment that looks more like this.

[Figure: "any-to-any" services connecting employees, customers, machines, suppliers and partners through portals, applications and data sources – more than 75% of data traffic flows east-west between applications.]

THE KEY CHALLENGE IN MOVING TO THE CLOUD

In this evolved model, there are still tiers, but all the components are interconnected. Typically, these connections were not well documented or necessarily well thought through. Attempting to move to the cloud means pulling apart these different pieces, which is difficult to do. Some pieces move more easily than others, but it's almost impossible to move all the pieces in a very complex environment, especially where a large enterprise might be running up to 10,000 applications.

TRAFFIC CHANGE

An added dimension to this is the change in the very nature of the traffic. In a client/server world, all the traffic was north-south. In a modern data center, most of the traffic is east-west. This has implications both for the network itself and for how applications are built.
HOW APPLICATIONS ARE BUILT TODAY

Today, there are two ways that applications are being built. The first is Mode 1, traditional applications, while the second is Mode 2 – applications built specifically to improve the agility of the application and the business processes behind it.

MODE 1 – "If it's not broken, don't fix it"
• Waterfall development
• Monolithic
• Apps not resilient
• 5x9s infrastructure
• Intel/non-Intel servers
• VMware
• High-end storage
• L2 adjacency

MODE 2 – "Fail fast, fix it fast"
• Agile development
• Micro-services
• Apps resilient
• 3x9s infrastructure
• Intel servers
• KVM, containers
• File, object storage
• L3 connectivity

In Mode 1, everything is treated very gently and carefully. In fact, it could be described as an "if it's not broken, don't fix it" environment. Large, monolithic applications are hard to change, so we tend to follow a waterfall development pattern, where there might be new releases between one and four times a year. In Mode 2, it's a continuous development cycle, an agile development process that delivers the application. Apps are broken down into smaller components with a modular approach, allowing changes in code that don't impact the rest of the app. Mode 2 applications are moving at the speed of the internet, thrown out quickly to test them in the real world. If they don't work, they are fixed quickly. If they succeed, they are scaled quickly. Mode 1 applications are generally not resilient, so they tend to be built on bulletproof infrastructure to achieve 5x9s uptime in a mission-critical environment. A Mode 2 app, however, can be scaled horizontally to ensure it's always available.
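The resilience trade-off between the two modes can be put in rough numbers. If instance failures are assumed to be independent (an idealization real deployments only approximate), the chance that every one of N instances is down at once is (1 - a)^N, so even modest per-instance availability compounds quickly. A minimal sketch, with illustrative figures:

```python
# Illustrative arithmetic only: compares a single "bulletproof" 5x9s server
# against N independent, cheaper 3x9s instances scaled horizontally.
# Assumes instance failures are independent, which real deployments only approximate.

def combined_availability(per_instance: float, instances: int) -> float:
    """Probability that at least one of `instances` is up."""
    return 1 - (1 - per_instance) ** instances

mode1 = 0.99999                          # one mission-critical server, five nines
mode2 = combined_availability(0.999, 5)  # five commodity instances, three nines each

print(f"Mode 1 single server: {mode1:.9f}")
print(f"Mode 2, 5 instances:  {mode2:.15f}")
```

With five 99.9% instances, the service is down only when all five are, which happens with probability 0.001^5 – far beyond five nines, on much cheaper hardware.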
If one instance is lost, there might be five other instances running, so the service is still alive even if one server is down. Hence, bulletproof infrastructure isn't needed. That's a very different cost point – but only if applications that can live in that environment are written.

EXPLORING THE IMPLICATIONS ON THE INFRASTRUCTURE

In Mode 1, on the server side, Intel blade servers will likely be the primary platform, but there may still be mainframes and other devices running those applications. In Mode 2, it's likely to be all Intel – not blades, but lower-cost rack servers, possibly white boxes, because they don't need to be bulletproof. On the virtualization side, VMware dominates the Mode 1 world today, but the Mode 2 world is moving away from VMware, primarily for cost reasons. Here, we see more KVM from the OpenStack world, or the newer concept of containers, which has become much more powerful in its ability to easily move applications around in the cloud space. From a storage standpoint, Mode 1 is built around Fibre Channel and bulletproof arrays, whereas Mode 2 is much more about network-centric, file- and object-based storage. In fact, this might even go to hyperconverged storage, with storage back in the server and managed as a pool across many servers.

In terms of the network, Mode 1 would likely see all servers dual-connected for the greatest degree of availability, and delivering Layer 2 services due to the Layer 2 adjacency that these components and their interconnections are driving in the network. A pure Mode 2 application, however, requires lower resiliency, so servers can be single-connected – cutting the number of ports in half – and it's built to live in a Layer 3 world.
It doesn't even need virtualized networks, as all the access control is built into the application itself. These applications are built very differently.

ORCHESTRATING AGILITY IN THE NETWORK

Agility in the enterprise is a key concept. Once upon a time, it was measured in days and weeks. In provisioning a physical server, setting up the networking side might take two weeks just to get to the top of the change management queue. In the modern, virtualized world, an application can be deployed onto a virtual server in two minutes, so a further two weeks to organize the network changes wipes out the agility we wanted to achieve. A change in network management that's overcoming this challenge is the concept of orchestration. The workflows for the network and the storage are built into the workflow that provisions the app onto the server, so a single act of automation can deliver the application and orchestrate all the infrastructure of the data center.

[Figure: time to provision – physical server: 2 months; network: 2 weeks; storage: 2 weeks; virtual server: 2 minutes.]

SELF-PROVISIONED CLOUDS AND THE ROLE OF THE NETWORK

On the path to the cloud, server virtualization has allowed us to consolidate servers and rein in server sprawl. In doing so came the realization that more agility can be achieved, especially with automation and orchestration around the server. The next step is the concept of a self-provisioned cloud, where developers or even users can deploy applications under the auspices of the experts. The aim, of course, is that applications can be up and running faster.

SO, IN THIS PATH TO THE CLOUD, WHAT IS THE ROLE OF THE NETWORK?

Well, of course, in the digital world the network is what connects the services, and the data, to the users.
Without the network, none of this would actually work. So essentially, the network is the foundation of the cloud in the modern data center and, like the foundation of a house, it's working best when you never have to think about it.

[Figure: the path to the cloud – 1. legacy data center (consolidation, lower cost); 2. virtualized data center (server virtualization, optimization, greater agility and availability); 3. cloud data center (self-provisioned clouds, public and private, orchestration, architecture and automation).]

THE NETWORK IS THE FOUNDATION

We don't want the network to get in the way of what you're trying to get done, so a network must deliver three things:

1. The application service – connectivity, availability, performance, security.
2. Agility across two timeframes – a short-term timeframe in deploying an application as rapidly as possible, and a longer-term timeframe in ensuring the network is application-agnostic. Today, we're running applications that five years ago we could not have imagined. Five years from now, the same thing is going to happen – applications that today we can't imagine. We need to ensure the network can deliver that today and five years from now.
3. Lower cost – as fast, reliable and agile as possible; here, the trade-off is cost.

[Figure: better user experience – "customer" satisfaction and user productivity through application service delivery (connectivity, availability, performance, security), agility (time to app service, application agnostic) and lower cost (CapEx optimization, OpEx reduction, improved ROI).]

GUIDING PRINCIPLES

In affirming the role of the network, there are some guiding principles that we think are key. Number one is simplicity. Data centers are, of course, complex, but we strive for simplicity. By simplifying the network, particularly the physical topology, it becomes faster and more reliable, costs less, and requires less power, space and cooling. The second guiding principle is security.
The more open we are in reaching out to customers, the more vulnerable we are, so security has to be built into the network to control access and protect data in flight. Finally, the third principle is ensuring the network is built on open standards.

[Figure: guiding principles – SIMPLE (easy to buy, deploy, operate and secure); SECURE (micro-perimeters, policy management, cloud services, security intelligence); OPEN (embrace open standards, enable choice, alleviate lock-in, standard APIs).]

BUILDING A MODERN NETWORK

The pace of change in the network has been phenomenal, and it's hard to absorb. So, we will make this simpler by covering three steps in more detail:

1. How do you simplify the network?
2. How do you secure the network?
3. How do you automate the operations around the network?

STEP 1: SIMPLIFY THE NETWORK

Simplifying the network is a vital first step to achieving new enterprise IT, as it makes the next two steps – security and automation – much easier to achieve.

HOW THINGS USED TO WORK

The old model in data centers was to build a Layer 2 network. These have some strengths and some weaknesses. The strength is that Ethernet is ubiquitous plug-and-play: everything can work with it, and it works nicely with all existing applications. The downside is the way in which it operates. Firstly, it doesn't tolerate loops, so it can't deal with two or more paths to the same location. The old model dictated that we suppress the loops by running a spanning tree, which in effect turned off half the bandwidth in the network. Secondly, a Layer 2 network is essentially a set of autonomous devices that are managed independently. They cooperate with each other by communicating addressing information through the data plane. Essentially, they yell at each other.
In a small network, this is tolerable, but it doesn't work in a big network. So, we are taking lessons from the wide area network and building coherence into the network. This has two aspects. Firstly, there's a control plane that suppresses loops in a more intelligent way, so all available bandwidth can be used, and it passes addressing in a much more efficient and effective way, so we're not sending broadcast storms across the network. Secondly, there's a common point of system management, as opposed to a collection of devices, which means automation can be leveraged much more effectively.

SIMPLIFY THE NETWORK
Old model: deploy individual network elements; autonomous devices; L2 data-plane driven.
New model: deploy a coherent network; shared distributed control plane; common management plane.

Here, the old networking model is the tree structure. When Ethernet first showed up in the data center, that's how we built the network. But there's a problem. In a world where the traffic is trying to go east and west, not north and south, this is a very inefficient architecture. Different latencies occur, which have a dramatic impact on the performance of the application. At scale, across many thousands of applications, this is a real problem. So, taking another lesson from the wide area network, the Clos design came to the fore (named after its pioneer, Charles Clos, a researcher at Bell Labs who defined how to build a non-blocking analog phone network using a minimum number of nodes). In the data center, this manifests as a spine and leaf. In the past, this was hard to do because we didn't necessarily have the ASIC density to build the spine. That's no longer the case, and we can absolutely build every new data center as a spine and leaf. As a vendor, this makes things easier, but there are some changes to consider. Firstly, we're no longer using the same network devices in the data center that we use in the local area network in the campus environment.
The needs for speed, functionality, density and cost point are all fundamentally different. The next thing is that we want to add coherence to this architecture. We can add a control plane in the spine and leverage MC-LAG (Multi-Chassis Link Aggregation Group) to suppress the loops without having to run a spanning tree, using all the bandwidth so that all links are active, although the individual devices are still managed separately. Importantly for us, the simplified network design is open, so vendor solutions can be mixed. And we can scale this to a pretty large size. But that's still not as simple as we can make it.

MAKING SIMPLE EVEN SIMPLER

Getting to the next level of simple involves building an Ethernet fabric, an approach Juniper pioneered. Essentially, this puts a control plane around all the switching devices inside the data center. Done properly, this is the simplest design because it ends up being self-provisioning. Think of it as a giant chassis. To add a new leaf, physically install it, cable it up and turn it on – everything else is done automatically. It gets discovered, it gets its software load, the uplinks are configured, and the ports take on whatever configuration you preprogrammed. Beyond this, the wide-area Layer 3 types of control planes like BGP, MPLS and so forth can be considered, and we can build an IP fabric. This manifests as a pure routed environment, all Layer 3. But sometimes it's not quite as simple. So, let's cover which approach to take and when to use an Ethernet fabric versus an IP fabric.
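Before weighing the two fabric types, it may help to make the spine-and-leaf wiring rule concrete. The sketch below (device names are illustrative, not tied to any product) simply enumerates the links a two-tier Clos fabric requires: every leaf attaches to every spine, so any two servers on different leaves are always the same two switch hops apart, with one equal-cost path per spine – the uniform east-west latency a tree topology cannot offer.

```python
# A minimal sketch of the spine-and-leaf (Clos) wiring rule: every leaf
# connects to every spine. Device names are illustrative placeholders.

from itertools import product

def clos_links(spines: int, leaves: int) -> list[tuple[str, str]]:
    """Enumerate every spine-to-leaf link in a two-tier Clos fabric."""
    return [(f"spine{s}", f"leaf{l}")
            for s, l in product(range(spines), range(leaves))]

links = clos_links(spines=4, leaves=16)
print(len(links))                                # 64 links: 4 spines x 16 leaves
print(sum(1 for s, l in links if l == "leaf0"))  # leaf0 has 4 uplinks, one per spine
```

Adding capacity is equally mechanical: more leaves for more ports, more spines for more east-west bandwidth, with no re-architecting.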
[Figure: coherent architectures – multi-tier with MC-LAG (L2/L3, config sync); Ethernet fabric (L2/L3: QFabric, Virtual Chassis Fabric, Junos Fusion); IP fabric (all L3, OpenCLOS).]

IP FABRICS

IP fabric design was pioneered in the data center by the likes of Amazon and Google, because they had massive scale problems to solve. But a pure IP fabric only runs applications that can live in a Layer 3 world. An overlay can be added, such as VXLAN using EVPN as the control plane, which means Layer 2 services can be run, allowing Layer 2-centric applications to be delivered in a Layer 3 environment. But this creates an issue of complexity, which represents one of the biggest challenges in this area.

[Figure: apps requiring L2 adjacency map to multi-tier MC-LAG and Ethernet fabrics; L3-centric apps scale on an IP fabric, with a virtual network fabric overlay on top.]

ETHERNET FABRICS

While not necessarily scaling as large as IP fabrics, Ethernet fabrics are the simplest. The challenge is that the software that makes this happen tends to make it less open, which is less effective in achieving good ease of management. An IP fabric, by contrast, is a very open environment using open protocols everywhere, but it doesn't have a swathe of management magic on top. So, the trade-off between the two tends to be one of complexity versus openness. In a Mode 1 data center that's all about reliability and stability (not necessarily agility), an Ethernet fabric can really shine. But if agility is a big priority, then an IP fabric with an SDN overlay is a big consideration. Private clouds, and certainly public clouds, tend to be built with an IP fabric with an overlay, with a controller that helps to automate the orchestration of the network, speed up deployment of applications and create greater business agility.
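One reason IP fabrics automate so well is that the underlay design is mechanical. A common pattern runs eBGP between the tiers, with the spines sharing one private ASN and each leaf getting its own, so every leaf load-balances across all spines. The sketch below is an illustrative assumption of such a numbering plan, not a fixed standard; tools in the spirit of OpenCLOS generate plans like this automatically.

```python
# A hedged sketch of one common IP-fabric numbering plan: eBGP underlay with a
# private ASN per leaf so each leaf peers with every spine over all its links.
# The ASN range and device names are illustrative assumptions.

SPINE_ASN = 64512        # spines share one private ASN in this sketch
LEAF_ASN_BASE = 64513    # leaves get unique ASNs counting up from here

def underlay_plan(spines: int, leaves: int) -> dict[str, int]:
    """Assign a private BGP ASN to every device in the fabric."""
    plan = {f"spine{s}": SPINE_ASN for s in range(spines)}
    plan.update({f"leaf{l}": LEAF_ASN_BASE + l for l in range(leaves)})
    return plan

for device, asn in sorted(underlay_plan(spines=2, leaves=4).items()):
    print(f"{device}: AS{asn}")
```

Because every device's role, ASN and peers follow from its position in the topology, the whole underlay can be derived from two numbers – which is exactly what makes the IP fabric approach attractive for automation.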
[Figure: Mode 1 data centers use multi-tier MC-LAG or an Ethernet fabric (L2/L3); Mode 2 private and public cloud data centers use an IP fabric (all L3) with a virtual network fabric overlay.]

WHICH WAY IS BEST?

It is possible to build one network that has both IP fabric and Ethernet fabric behaviors. So, for example, with the same spine, you could support a set of Mode 1 pods in the data center using an Ethernet fabric. Meanwhile, another set of pods could be used to start small and build up a private cloud with Mode 2 applications; this is where an IP fabric with an overlay and a controller might be deployed. Over time, infrastructure can be migrated from Mode 1 to Mode 2, not by physically moving anything (because it's the same devices, cabling and spine) but by changing the software behavior that runs on top. The same network can also support a legacy environment, so existing switches might sit in front of mainframes, with MC-LAG or an extended Ethernet fabric bringing them into this environment. Of course, as a cloud, connections out to other data centers and to public clouds are key, so applications can move between these different environments. And while these might be different environments and cultures, it will run as one network, because all these applications need to talk to each other in a reliable and performant fashion.

STEP 2: SECURING THE NETWORK

Let's now cover security as the next step in changing business models on our path to the cloud. Previously, we described the security model in most enterprise organizations as a castle model.
While there was typically a firewall in front of each application, this has now been pushed to the edge of the data center. Essentially, this creates a DMZ at the edge, like the walls of a large castle. Today, enterprise security is more like a hotel model. We walk into the lobby, where there's security; we use a room key to get into the elevator, the same key to access our room, and perhaps there's a safe in the closet too. With so many different layers of security, it supports the concept of micro-segmentation. But this creates a few issues. For instance, rules tend to check into our hotel model but never leave, so over time, applications have come and gone, but the rules instantiated in the firewalls to support them are still there. The workaround tends to be to move the rules in front of the individual applications. So, in addition to the perimeter of the data center, we build protection around individual pieces of the application – a concept referred to as micro-segmentation. Of course, the key is how to manage this increased number of partitions and segments.

[Figure: micro-segments placed around individual applications – analysis, reporting, CRM, purchasing, mail, HR, inventory and order processing – serving employees, customers, machines, suppliers and partners through portals, applications and data sources.]

AUTOMATING NETWORK SECURITY

Managing the increased complexity in network security tends to be taken care of with automation. While we continue to have protection at the edge of the data center, we can add more levels, like virtual firewalls or, more likely, access control built into virtual switches. If threats are detected, they can be shut down at the firewall level or at the virtual switch level, but also at the physical switch level, to prevent threats from propagating.
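Automating security also has to solve the "rules check in but never leave" problem described above, which is at heart a lifecycle problem: perimeter firewalls accumulate rules with no record of which application needed them. A minimal sketch of the alternative (rule and application names are hypothetical) ties each rule to the application it protects, so retiring the app retires its rules too:

```python
# A sketch of rule lifecycle management for micro-segmentation: every firewall
# rule is tagged with the application that owns it, so retiring the application
# garbage-collects its rules instead of leaving them behind.
# Application, endpoint and port values are hypothetical examples.

from collections import defaultdict

class SegmentPolicy:
    def __init__(self):
        self.rules = defaultdict(list)   # app name -> its firewall rules

    def allow(self, app: str, src: str, dst: str, port: int):
        self.rules[app].append((src, dst, port))

    def retire(self, app: str) -> int:
        """Remove an application and return how many rules went with it."""
        return len(self.rules.pop(app, []))

policy = SegmentPolicy()
policy.allow("order-processing", src="portal", dst="inventory-db", port=5432)
policy.allow("order-processing", src="portal", dst="payments", port=443)
policy.allow("crm", src="sales", dst="crm-db", port=1521)

print(policy.retire("order-processing"))  # 2 rules cleaned up along with the app
```

The same ownership tag is what lets a policy engine push each rule to the enforcement point closest to the application, rather than piling everything onto the perimeter.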
A policy engine controls all this, of course, augmented by services in the cloud for investment protection and threat intelligence.

[Figure: evolution of data center security – enforcement points from the router (MX Series, stateless L2-3 ACLs) and physical firewall appliance (SRX Series, stateful L2-L7 firewall), through spine switches (QFX10000) and leaf switches (QFX5100) with stateful ACLs and/or L4-7 firewalling, down to virtualized hosts with virtual firewalls (dFW, Contrail vRouter, vSRX); all driven by a central policy engine and threat intelligence, with APIs and libraries exposed to operators, orchestration and automation platforms.]

STEP 3: AUTOMATING THE NETWORK

The final step on our path to the cloud is centered on how we automate – not just the network, but the deployment of applications onto the network as well. In a data center, there are essentially three sets of tasks to automate. The old model involved vendors providing tools to manage devices. But data center managers don't want to manage devices; they want to deploy applications. So, the new concept is about automating the workflow that delivers the application. Significant benefits arise from this. Firstly, repeatability is important. More than 50% of network outages can be traced to human intervention, so by avoiding these mistakes, we ensure a more reliable and agile infrastructure with lower operating costs. Of the three sets of tasks illustrated (build and provision; operate and monitor; orchestration), we think of the first two as network-centric, bottom-up tasks.
Orchestration, on the other hand, is all about the application, which is much more about driving things from the top down and considering the entire infrastructure in the data center.

AUTOMATE OPERATIONS
Old model: manage network devices.
New model: automate the workflow of delivering the application.
Benefits: repeatability; more reliable; more agile; lower operating cost.

JUNOS AUTOMATION

At Juniper, automation starts with the Junos operating system. Every feature in Junos is accessed through a programmatic interface. We then add APIs that include the CLI as well as APIs for programmatic access such as NETCONF or OpenConfig – the kind of innovation we support through our commitment to open standards. On top of this, libraries are added that allow access to some of the new open source tools available today, an evolution that's occurred over the last eight years or so. Many open source tools are expanding to include the network – often simple to use and written at a high level of abstraction.

[Figure: the Junos automation stack – from the Junos data plane, YANG models and telemetry interface, up through NETCONF/OpenConfig, REST, gRPC, SNMP and CLI APIs, to libraries and frameworks such as PyEZ and RubyEZ, and on to tools like Python scripts, Ansible, SaltStack, Ruby scripts, Puppet, Chef and JSNAPy.]

AUTOMATING OPERATE AND MONITOR AND BUILD AND PROVISION TASKS

The target here is to automate the 20% of tasks that take 80% of the time, freeing up people to do more important activities and limiting the mistakes that go along with them. Sometimes we can build this into the fabric itself, so an Ethernet fabric, properly designed, is self-provisioning. Tools can also be deployed, like Junos Space Network Director, or open source tools like Ansible, Salt, Python, Puppet and Chef.
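As a flavor of the "build and provision" tasks worth automating first, consider access-port configuration: the same stanza repeated across dozens of ports is exactly where manual work breeds outages. The sketch below generates Junos-style "set" commands from one declarative plan; the port names, VLAN names and exact command syntax are illustrative, so verify them against the platform's own documentation before use.

```python
# A minimal sketch of a repeatable build-and-provision task: rendering per-port
# configuration from one declarative plan, so nobody hand-types the same stanza
# dozens of times. The "set" command syntax is Junos-style but illustrative.

PORT_PLAN = {
    "ge-0/0/0": "servers",
    "ge-0/0/1": "servers",
    "ge-0/0/47": "uplink",
}

def render_config(plan: dict[str, str]) -> list[str]:
    """Render one 'set' command per access port in the plan."""
    return [
        f"set interfaces {port} unit 0 family ethernet-switching vlan members {vlan}"
        for port, vlan in sorted(plan.items())
    ]

for line in render_config(PORT_PLAN):
    print(line)
```

The point is repeatability: the plan is reviewed once, the rendering never fat-fingers a port, and the same script can be re-run to audit a device against its intended state.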
[Figure: two approaches – bottom-up automation through the Junos automation stack (Chef, Puppet, Ansible, SaltStack, JSON, Python) and network coherence through Junos Space Network Director and Security Director.]

AUTOMATING NETWORK ORCHESTRATION

Automating orchestration is a different challenge, because it starts with the application and then works its way down. So, the first decision to make is how you build the orchestration software. The best off-the-shelf experience is VMware. It's proven to help provision applications onto servers, and in Mode 1 it dominates. But there are two downsides. Firstly, it only provisions things onto ESXi, so it's very specific to a VMware environment. Secondly, VMware is rightly very proud of its software, so it can be expensive. And so we do see enterprises that are building out private clouds and public clouds looking in other directions to get that work done. One of those directions is OpenStack, but note that some assembly is required.

CONTAINING OUR EXCITEMENT

Where the open source community is attempting to emulate AWS, it succeeds in terms of innovation. Integration, however, is more challenging. Having said this, containers represent an interesting approach, allowing library sets to be integrated specifically for the application. Docker is the tool that manifests containers, and it can run on Linux, on Windows, in ESXi, inside your own data center or in a public cloud, so it's a very flexible environment in terms of where and how it's run. With Kubernetes as the orchestration environment and approaches like Red Hat OpenShift, this off-the-shelf type of experience could be the future direction for a Mode 2 world.

NETWORK VIRTUALIZATION

CAN WE DO FOR THE NETWORK WHAT WE DID FOR THE SERVER?
So, in considering network orchestration, is it possible to virtualize the network in the same way we've virtualized servers, and achieve the same benefits?

[Figure: What a virtual overlay provides. Virtual networks: segmenting the network, L2 services. Network and packet policy: network policy for topology, packet policy for traffic control. Service injection: network functions and services stitched to the topology. Gateways: connecting the virtual and physical domains.]

Here, we're not orchestrating the physical network but the virtual slice of the network, which is where SDN comes into play. The term Software-Defined Networking is a bit unfortunate, because software has been defining network behavior for decades; the way it manifests itself in the data center today is as a virtual overlay that lets us automate the orchestration of the network based on the objectives of each application. Essentially, it does four things: builds virtual networks, sets up network policy, allows for service injection and automates gateways. Let's describe this in more detail.

We've had virtual networks in data centers for decades in the form of VLANs, but the challenge is that a VLAN terminates in the physical network, not at the application. When an application is deployed, the VLAN is defined as a separate act. If the application is moved, the VLAN has to be defined in the new physical location; if the application moves somewhere that VLAN doesn't exist, the app stops running; and if the app is deleted, the VLAN has to be deleted separately. In the new world, we're using a technology called VXLAN. This starts the tunnel at the virtual port, and that virtual port is connected to the container that contains the application.
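The contrast with VLANs can be captured in a toy Python model. All class and method names below are invented for this sketch, not any real controller API; the point is that the VXLAN endpoint's lifecycle is tied to the application, not to a physical location.

```python
# Toy model: a VXLAN virtual network whose endpoints (VTEPs) follow the
# application. Deploying a workload creates its tunnel endpoint, moving
# the workload re-homes the endpoint, and deleting the workload tears it
# down, with no separate VLAN administration step. Names are illustrative.
class VirtualNetwork:
    def __init__(self, vni: int):
        self.vni = vni                    # VXLAN Network Identifier
        self.vteps: dict[str, str] = {}   # app name -> host currently running it

    def deploy(self, app: str, host: str) -> None:
        """Deploying the app instantiates its VTEP automatically."""
        self.vteps[app] = host

    def move(self, app: str, new_host: str) -> None:
        """Moving the app moves its endpoint with it."""
        self.vteps[app] = new_host

    def delete(self, app: str) -> None:
        """Deleting the app removes its endpoint from the network."""
        del self.vteps[app]

vn = VirtualNetwork(vni=5001)
vn.deploy("web", host="server-a")
vn.move("web", new_host="server-b")   # endpoint follows the workload
print(vn.vteps)                       # {'web': 'server-b'}
vn.delete("web")
print(vn.vteps)                       # {}
```

With VLANs, each of those three operations would be a separate administrative act against the physical network; here the virtual network simply tracks the application.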
The endpoint, the virtual tunnel endpoint (VTEP), is attached to the application, and the software behaves accordingly: when the application is deployed, the virtual network automatically instantiates itself in the network; if the app is moved, the endpoint automatically moves with it; and if the app is deleted, the virtual network goes away.

[Figure: Network virtualization with VLANs, shown with hosts A1, B1, A2 and B2.]

NETWORK POLICY IN VIRTUALIZATION

Here, we can create a red network and a green network. Developers decide which pieces of their application to put in which virtual network, and the default policy is that no traffic can move between the red network and the green network. With new applications, developers can also define the egress and ingress points and then inject services into the path. These can be predefined services like firewalls, IDP devices, load balancers, NAT and WAN compression, so that when the application gets deployed, the network automatically behaves in the right fashion.

[Figure: Network policy and service chaining. Services such as a firewall, NAT, IDP and load balancer are chained between virtual networks A and B over a VXLAN tunnel.]

A final benefit of this environment is something we call gateways. A gateway allows us to connect to the rest of the world. So far we're simply building a VXLAN tunnel between two different virtual switches or virtual ports across the physical network, which we call the underlay network. The underlay can be any set of devices that can pass Ethernet, which makes it simple, but it can also be problematic: we've now created both a virtual network and a physical network that need to be debugged, so telemetry and correlation of events are going to be important. But what happens if we include a bare-metal server, a server that has neither a virtual switch nor a virtual port, and we want to communicate back to our virtualized server?
[Figure: Gateways. Two virtualized servers and a bare-metal server connected through L3 and WAN gateways, all managed by a controller.]

One way to do it is to run the traffic up through a software gateway and back down to the bare-metal server. In practice, though, this is not a good idea. What kinds of applications sit on bare-metal servers? Generally two categories: apps that are peripheral, and huge Oracle databases that everything connects to. It's not a good idea to run every database transaction through a dogleg like that. Our preference is to put the gateway in the network device, so the traffic coming through is converted from VXLAN to, in this case, a VLAN. Or, going outside the data center, we would go to the edge router and convert from VXLAN to, say, MPLS or L3VPN, whatever is preferred in the wide area. I can define those gateways statically, but I have to make sure the devices themselves support the VTEP. And for automation, I can connect it all back to the controller, so the controller is connected to all the various VTEPs, both in the servers and virtual switches and in the physical switches. In doing so, I can build an environment that's very powerful.

SOFTWARE DEFINED NETWORKING

At Juniper, we support two SDN choices: NSX and Contrail. If you've decided that VMware and vCenter are going to be your center of gravity for orchestration, then NSX is absolutely the right answer. If you believe you're moving to KVM, OpenStack, containers and these types of things, then Contrail is a powerful choice. Either way, that's how you achieve agility in the network for Mode 2 applications.

SUMMARY

IN SUMMARY

At Juniper, we believe in building coherent networks to help simplify both the physical topology and the operations of the network itself. Once we've done that, we can secure it.
And by moving to micro-perimeters, we can do a much better job of protecting the applications. Then finally, automation, particularly as we get into micro-perimeters, micro-segments and the need for agility, is where SDN really pays off.

[Figure: Summary architecture. Orchestration, network automation and analytics, network virtualization, network security and network infrastructure spanning data centers and DCI, following three steps: 1. Simplify the network. 2. Secure the network. 3. Automate operations.]

Corporate and Sales Headquarters
Juniper Networks, Inc.
1133 Innovation Way
Sunnyvale, CA 94089 USA
Phone: 888-JUNIPER (888-586-4737) or +1.408.745.2000
Fax: +1.408.745.2100

APAC and EMEA Headquarters
Juniper Networks International B.V.
Boeing Avenue 240
1119 PZ Schiphol-Rijk
Amsterdam, The Netherlands
Phone: +31.0.207.125.700
Fax: +31.0.207.125.701

Learn more at: juniper.net/cloud-grade-networking

Copyright 2017 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice. 7400067-001-EN