SRE Load Balancing Techniques: Data Center Load Balancing

SRE    |    Intermediate
  • 14 videos | 1h 3m 4s
  • Includes Assessment
  • Earns a Badge
Rating: 4.8 (177 users)
A Site Reliability Engineer (SRE) must know how to perform load balancing within the data center, both internally and externally. In this course, you'll learn about load balancing, including various methods for balancing loads in the data center. You'll begin by examining what data center load balancing is and its importance to performance, as well as load balancing policies. You'll then learn how to deal with unhealthy tasks using flow control, and tips and tricks for optimizing load balancing. Next, you'll examine methods for limiting connection pools with subsetting, and the various load balancing components. Lastly, you'll learn how to balance loads internally and externally using HTTPS and TCP/UDP, and how to balance loads using SSL and TCP proxy load balancing.

WHAT YOU WILL LEARN

  • Discover the key concepts covered in this course
  • Describe the various data center load balancing techniques and how they increase performance
  • Describe what is meant by a load balancing policy and how it's applied to load balancers
  • Outline a simple approach to dealing with unhealthy tasks using flow control
  • List various tips and tricks for working with and optimizing load balancing
  • Describe methods for limiting the connection pool with subsetting
  • Name and describe the various components that make up load balancing
  • Outline how loads can be balanced using internal HTTPS load balancing
  • Outline how loads can be balanced using external HTTPS load balancing
  • Outline how loads can be balanced using internal TCP/UDP load balancing
  • Outline how loads can be balanced using external TCP/UDP load balancing
  • Outline how loads can be balanced using SSL proxy load balancing
  • Outline how loads can be balanced using TCP proxy load balancing
  • Summarize the key concepts covered in this course

IN THIS COURSE

  • 1m 27s
  • 2.  Data Center Load Balancing
    5m 41s
    In this video, you'll learn more about data center load balancing. The primary goal of any data center is to ingest and serve data as efficiently as possible. The video outlines three types of data center load balancing: plain old load balancing (POLB), network load balancing, and HTTP load balancing. You'll then learn how these load balancing algorithms can be used to improve application performance.
  • 3.  Load Balancing Policies
    5m 46s
    Load balancing policies determine how traffic is distributed among the backend servers in a pool. There are two main types of load balancers, TCP load balancers and HTTP load balancers, and each supports its own policies. Common policies include weighting algorithms, round-robin, least connections, and IP hash.
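The three named policies can be sketched in a few lines of Python. This is an illustrative sketch, not course material: the backend addresses are made up, and the MD5-based hash is just one way to implement IP hashing.

```python
"""Toy selectors for three common load balancing policies."""
import hashlib
import itertools

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # illustrative addresses

# Round-robin: cycle through backends in a fixed order.
rr = itertools.cycle(backends)

def round_robin():
    return next(rr)

# Least connections: pick the backend with the fewest active connections.
active = {b: 0 for b in backends}  # connection counts, updated elsewhere

def least_connections():
    return min(active, key=active.get)

# IP hash: the same client IP always maps to the same backend.
def ip_hash(client_ip: str) -> str:
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```

Weighting algorithms would extend any of these by favoring backends with larger capacity, for example by listing a heavier backend multiple times in the pool.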
  • 4.  Load Balancing Flow Control
    5m 58s
    In this video, you will learn how to handle unhealthy backend servers in a load balancing situation. You will learn about Google's lame duck approach to backend server health and how it works behind the scenes. You will also discover the benefits of a simple client-side load balancing strategy and its drawbacks.
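A minimal sketch of the client-side idea, assuming three health states including the lame-duck state described in the video (the state names and backend addresses are illustrative, not from the course):

```python
"""Client-side flow control sketch with a lame-duck state."""

HEALTHY, LAME_DUCK, UNHEALTHY = "healthy", "lame_duck", "unhealthy"

# A lame-duck backend finishes its in-flight requests but accepts no new
# ones, so clients avoid it when choosing a backend, just like an
# unhealthy one.
backend_state = {
    "10.0.0.1": HEALTHY,
    "10.0.0.2": LAME_DUCK,    # draining before maintenance
    "10.0.0.3": UNHEALTHY,
}

def pick_backend(states):
    candidates = [b for b, s in states.items() if s == HEALTHY]
    if not candidates:
        raise RuntimeError("no healthy backends available")
    return candidates[0]  # a real client would also balance among these
```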
  • 5.  Load Balancing Tips and Tricks
    4m 38s
    This session discusses various tips and tricks for load balancing. First, pay attention to the type of sessions you'll be load balancing. If sessions don't need to be processed by the same server instance, requests can be distributed independently. However, if back-end processing requires knowledge of multiple requests or request history, then session persistence (sticky sessions) may be required. Additionally, when analyzing your peak load periods, it's important to know when those periods occur.
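Session persistence can be sketched by hashing a session identifier to a backend, so every request in the same session lands on the same server. This is an illustrative sketch with made-up backend names, not the course's implementation.

```python
"""Sticky sessions: the same session ID always maps to one backend."""
import hashlib

backends = ["app-1", "app-2", "app-3"]  # illustrative backend names

def backend_for_session(session_id: str) -> str:
    # Hashing the session ID (rather than storing a mapping) keeps the
    # assignment stable as long as the backend list does not change.
    h = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return backends[h % len(backends)]
```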
  • 6.  Connection Pool Limits
    4m 31s
    There are two main methods for limiting the number of connections a client makes to a server: subsetting and sticky sessions. With subsetting, each client is limited to connecting to a certain number of backend servers, which frees up resources on those servers. The downside is that this can be time-consuming, as connections may need to be created and torn down. Sticky sessions don't have this limitation and allow clients to connect to any number of backend servers.
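One way to implement subsetting is deterministic subsetting, where each client computes the same small subset of backends from its client ID. The function below is an illustrative sketch under that assumption; the pool sizes and seeding scheme are examples, not from the course.

```python
"""Deterministic subsetting sketch: each client connects to only a
fixed-size subset of backends, capping per-server connection counts."""
import random

def subset(backends, client_id, subset_size):
    # Clients in the same "round" shuffle the pool identically, so each
    # client's subset is stable across restarts and subsets in a round
    # don't overlap.
    subset_count = len(backends) // subset_size
    round_no = client_id // subset_count
    rng = random.Random(round_no)      # deterministic seed per round
    shuffled = backends[:]
    rng.shuffle(shuffled)
    offset = (client_id % subset_count) * subset_size
    return shuffled[offset:offset + subset_size]
```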
  • 7.  Load Balancing Components
    7m 3s
    In this video, we will discuss the various components that make up a load balancing architecture: backend services, forwarding rules, health checks, and URL maps. We will also discuss how IPv6 is supported in Google Cloud and the role certificates and policies play in SSL requests.
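A URL map routes request paths to backend services. The toy version below illustrates the idea with longest-prefix matching; the paths and service names are made-up examples, not Google Cloud's actual configuration format.

```python
"""Toy URL map: route a request path to a backend service."""

url_map = {
    "/video": "video-backend-service",
    "/static": "static-backend-service",
    "/": "web-backend-service",   # default rule
}

def route(path: str) -> str:
    # Prefer the longest matching prefix, so "/video/clip1" hits the
    # video service rather than the default.
    best = max((p for p in url_map if path.startswith(p)), key=len)
    return url_map[best]
```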
  • 8.  Internal HTTPS Load Balancing
    5m 12s
    In this video, you'll learn how loads can be balanced using internal HTTPS load balancing. These load balancers are proxy-based, which means client connections are terminated at the load balancer and new connections are made between the load balancer and the backend servers. Because they use the HTTPS protocol, these load balancers work at Layer 7, the application layer, so they have access to all aspects of an HTTPS request when making load balancing decisions.
  • 9.  External HTTPS Load Balancing
    4m 35s
    In this video, you'll learn how external HTTPS load balancing handles web traffic. An external HTTPS load balancer ingests traffic from external web clients and distributes the load among several web frontend instances across multiple regions. This is why, when you open your web browser and go to a cloud-based website, you don't have to determine which region to access; the load balancer does it for you.
  • 10.  Internal TCP/UDP Load Balancing
    4m 55s
    In this video, you'll learn about internal TCP/UDP load balancing. An internal TCP/UDP load balancer balances traffic internal to a region of a cloud system. The primary difference between the TCP/UDP and HTTPS flavors of internal load balancers is that TCP/UDP works at Layer 4, the transport layer, while HTTPS works at Layer 7, the application layer.
  • 11.  External TCP/UDP Load Balancing
    4m 2s
    In this video, you will learn how external TCP/UDP load balancing works. A network load balancer directs both TCP and UDP traffic. Google Cloud's implementation of an external TCP/UDP load balancer is called a regional load balancer; regional load balancers are limited to one region and one virtual private cloud (VPC).
  • 12.  SSL Proxy Load Balancing
    4m 7s
    SSL proxy load balancing allows encrypted traffic to be load balanced. It is designed to improve the performance of non-HTTP traffic and can be used with both IPv4 and IPv6. Google's SSL proxy load balancer offers several benefits, such as automatic security patches and intelligent routing.
  • 13.  TCP Proxy Load Balancing
    3m 51s
    In this video, we will cover how loads can be balanced using TCP proxy load balancing, looking at how Google implements this load balancer and some of its benefits.
  • 14.  Course Summary
    1m 18s

EARN A DIGITAL BADGE WHEN YOU COMPLETE THIS COURSE

Skillsoft is providing you the opportunity to earn a digital badge upon successful completion of some of our courses, which can be shared on any social network or business platform.

Digital badges are yours to keep, forever.
