Open-Source Service Discovery

Service discovery is a key component of most distributed systems and service oriented architectures. The problem seems simple at first: How do clients determine the IP and port for a service that exists on multiple hosts?

Usually, you start off with some static configuration which gets you pretty far. Things get more complicated as you start deploying more services. With a live system, service locations can change quite frequently due to auto or manual scaling, new deployments of services, as well as hosts failing or being replaced.

Dynamic service registration and discovery becomes much more important in these scenarios in order to avoid service interruption.

This problem has been addressed in many different ways and is continuing to evolve. We’re going to look at some open-source or openly-discussed solutions to this problem to understand how they work. Specifically, we’ll look at how each solution uses strongly or weakly consistent storage, its runtime dependencies, its client integration options and what the tradeoffs of those features might be.

We’ll start with some strongly consistent projects such as Zookeeper, Doozer and Etcd, which are typically used as coordination services but also serve as service registries.

We’ll then look at some interesting solutions specifically designed for service registration and discovery. We’ll examine Airbnb’s SmartStack, Netflix’s Eureka, Bitly’s NSQ, Serf, Spotify’s use of DNS and finally SkyDNS.

The Problem

There are two sides to the problem of locating services: Service Registration and Service Discovery.

  • Service Registration - The process of a service registering its location in a central registry. It usually registers its host and port and sometimes authentication credentials, protocols, version numbers, and/or environment details.
  • Service Discovery - The process of a client application querying the central registry to learn of the location of services.

Any service registration and discovery solution also has other development and operational aspects to consider:

  • Monitoring - What happens when a registered service fails? Sometimes it is unregistered immediately, sometimes after a timeout, and sometimes by another process. Services are usually required to implement a heartbeating mechanism to ensure liveness, and clients typically need to be able to handle failed services reliably.
  • Load Balancing - If multiple services are registered, how do all the clients balance the load across the services? If there is a master, can it be determined by a client correctly?
  • Integration Style - Does the registry only provide a few language bindings, for example, only Java? Does integrating require embedding registration and discovery code into your application or is a sidekick process an option?
  • Runtime Dependencies - Does it require the JVM, Ruby or something that is not compatible with your environment?
  • Availability Concerns - Can you lose a node and still function? Can it be upgraded without incurring an outage? The registry will grow to be a central part of your architecture and could be a single point of failure.

General Purpose Registries

These first three registries use strongly consistent protocols and are actually general purpose, consistent datastores. Although we’re looking at them as service registries, they are typically used for coordination services to aid in leader election or centralized locking with a distributed set of clients.

Zookeeper

Zookeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. It’s written in Java, is strongly consistent (CP) and uses the Zab protocol to coordinate changes across the ensemble (cluster).

Zookeeper is typically run with three, five or seven members in the ensemble. Clients use language specific bindings in order to access the ensemble. Access is typically embedded into the client applications and services.

Service registration is implemented with ephemeral nodes under a namespace. Ephemeral nodes only exist while the client is connected so typically a backend service registers itself, after startup, with its location information. If it fails or disconnects, the node disappears from the tree.

Service discovery is implemented by listing and watching the namespace for the service. Clients receive all the currently registered services as well as notifications when a service becomes unavailable or new ones register. Clients also need to handle any load balancing or failover themselves.
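
To make the flow concrete, here is a minimal sketch of both sides using the go-zookeeper client for Go. The library choice, paths, addresses and payload format are assumptions for illustration, and the parent /services/web node is assumed to already exist.

```go
// Sketch: ephemeral-node registration and watch-based discovery with the
// github.com/go-zookeeper/zk client. Paths, addresses and the payload format
// are illustrative assumptions; the parent znode must already exist.
package main

import (
	"fmt"
	"time"

	"github.com/go-zookeeper/zk"
)

func main() {
	conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Registration: create an ephemeral, sequential node under the service's
	// namespace. It disappears automatically if this client disconnects.
	payload := []byte("10.0.0.5:8080")
	_, err = conn.Create("/services/web/instance-", payload,
		zk.FlagEphemeral|zk.FlagSequence, zk.WorldACL(zk.PermAll))
	if err != nil {
		panic(err)
	}

	// Discovery: list the namespace and watch it for membership changes.
	children, _, events, err := conn.ChildrenW("/services/web")
	if err != nil {
		panic(err)
	}
	fmt.Println("current instances:", children)
	<-events // blocks until an instance registers or disappears
}
```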

The Zookeeper API can be difficult to use properly and language bindings might have subtle differences that could cause problems. If you’re using a JVM based language, the Curator Service Discovery Extension might be of some use.

Since Zookeeper is a CP system, when a partition occurs, some of your system will not be able to register or find existing registrations even if they could function properly during the partition. Specifically, on any non-quorum side, reads and writes will return an error.

Doozer

Doozer is a consistent, distributed data store. It’s written in Go, is strongly consistent and uses Paxos to maintain consensus. The project has been around for a number of years but has stagnated for a while and now has close to 160 forks. Unfortunately, this makes it difficult to know what the actual state of the project is and whether it is suitable for production use.

Doozer is typically run with three, five or seven nodes in the cluster. Clients use language specific bindings to access the cluster and, similar to Zookeeper, integration is embedded into the client and services.

Service registration is not as straightforward as with Zookeeper because Doozer does not have any concept of ephemeral nodes. A service can register itself under a path but if the service becomes unavailable, it won’t be removed automatically.

There are a number of ways to address this issue. One option might be to add a timestamp and heartbeating mechanism to the registration process and handle expired entries during the discovery process or with a separate cleanup process.

Service discovery is similar to Zookeeper in that you can list all the entries under a path and then wait for any changes to that path. If you use a timestamp and heartbeat during registration, you would ignore or delete any expired entries during discovery.
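
Since Doozer bindings vary, the sketch below shows the timestamp-and-heartbeat scheme against a hypothetical key-value interface rather than a real Doozer client; the interface, paths and TTL are assumptions.

```go
// A sketch of the timestamp-plus-heartbeat workaround described above,
// written against a hypothetical key-value client interface rather than a
// real Doozer binding; the interface, paths and TTL are all assumptions.
package registry

import (
	"strconv"
	"strings"
	"time"
)

// KV abstracts the few operations the scheme needs from the store.
type KV interface {
	Set(path, value string) error
	List(path string) ([]string, error) // values of the children of path
}

const ttl = 30 * time.Second

// Register writes "host:port|unix-timestamp". Call it on a ticker so the
// timestamp keeps advancing while the service is alive.
func Register(kv KV, service, addr string) error {
	val := addr + "|" + strconv.FormatInt(time.Now().Unix(), 10)
	return kv.Set("/services/"+service+"/"+addr, val)
}

// Discover lists the registered entries and drops any whose last heartbeat
// is older than the TTL, since the store will not expire them for us.
func Discover(kv KV, service string) ([]string, error) {
	values, err := kv.List("/services/" + service)
	if err != nil {
		return nil, err
	}
	var live []string
	for _, val := range values {
		parts := strings.SplitN(val, "|", 2)
		if len(parts) != 2 {
			continue
		}
		ts, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil || time.Since(time.Unix(ts, 0)) > ttl {
			continue // stale or malformed entry; a cleanup process could delete it
		}
		live = append(live, parts[0])
	}
	return live, nil
}
```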

Like Zookeeper, Doozer is also a CP system and has the same consequences when a partition occurs.

Etcd

Etcd is a highly-available, key-value store for shared configuration and service discovery. Etcd was inspired by Zookeeper and Doozer. It’s written in Go, uses Raft for consensus and has an HTTP+JSON based API.

Etcd, similar to Doozer and Zookeeper, is usually run with three, five or seven nodes in the cluster. Clients use a language specific binding or implement one using an HTTP client.

Service registration relies on using a key TTL along with heartbeating from the service to ensure the key remains available. If a service fails to update the key’s TTL, Etcd will expire it. If a service becomes unavailable, clients will need to handle the connection failure and try another service instance.

Service discovery involves listing the keys under a directory and then waiting for changes on the directory. Since the API is HTTP based, the client application keeps a long-polling connection open with the Etcd cluster.
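
A minimal sketch of this pattern against Etcd’s v2 HTTP keys API, roughly as it existed at the time; the address, key paths and TTL values are assumptions.

```go
// Sketch of TTL-based registration and long-polling discovery against
// Etcd's v2 HTTP keys API; the address, key paths and TTL are assumptions.
package main

import (
	"net/http"
	"net/url"
	"strings"
	"time"
)

const key = "http://127.0.0.1:4001/v2/keys/services/web/instance1"

// register (re)writes the key with a 30 second TTL; calling it on a ticker
// is the heartbeat. If the service dies, Etcd expires the key on its own.
func register() error {
	form := url.Values{"value": {"10.0.0.5:8080"}, "ttl": {"30"}}
	req, err := http.NewRequest("PUT", key, strings.NewReader(form.Encode()))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

// waitForChange is the discovery side: a long-polling GET that Etcd holds
// open until something under the /services/web directory changes.
func waitForChange() error {
	resp, err := http.Get("http://127.0.0.1:4001/v2/keys/services/web?recursive=true&wait=true")
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	register() // initial registration
	for range time.Tick(10 * time.Second) {
		register() // renew well before the TTL lapses
	}
}
```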

Since Etcd uses Raft, it should be a strongly-consistent system. Raft requires a leader to be elected and all client requests are handled by the leader. However, Etcd also seems to support reads from non-leaders using an undocumented consistent parameter, which would improve availability in the read case. Writes would still need to be handled by the leader during a partition and could fail.

Single Purpose Registries

These next few registration services and approaches are specifically tailored to service registration and discovery. Most have come about from actual production use cases while others are interesting and different approaches to the problem. Whereas Zookeeper, Doozer and Etcd could also be used for distributed coordination, these solutions don’t have that capability.

Airbnb’s SmartStack

Airbnb’s SmartStack is a combination of two custom tools, Nerve and Synapse, that leverage haproxy and Zookeeper to handle service registration and discovery. Both Nerve and Synapse are written in Ruby.

Nerve is a sidekick style process that runs as a separate process alongside the application service. Nerve is responsible for registering services in Zookeeper. For HTTP services, applications expose a /health endpoint that Nerve continuously monitors. Provided the service is available, it will be registered in Zookeeper.

The sidekick model eliminates the need for a service to interact with Zookeeper. It simply needs a monitoring endpoint in order to be registered. This makes it much easier to support services in different languages where robust Zookeeper bindings might not exist. This also provides many of the benefits of the Hollywood principle.
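
As a concrete example, the monitoring endpoint can be as small as a single HTTP handler. The sketch below is only an assumption about what such an endpoint might look like in Go; the path and port are placeholders, and a real check would verify the service’s dependencies rather than always returning OK.

```go
// Minimal sketch of the kind of /health endpoint a Nerve-style monitor could
// poll; the path and port are assumptions, and a real check would inspect
// dependencies (database connections, queue depth, etc.).
package main

import "net/http"

func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		// Return 200 while the service considers itself healthy; the sidekick
		// keeps the registration alive only while this check passes.
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("OK"))
	})
	http.ListenAndServe(":8080", nil)
}
```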

Synapse is also a sidekick style process that runs as a separate process alongside the service. Synapse is responsible for service discovery. It does this by querying Zookeeper for currently registered services and reconfiguring a locally running haproxy instance. Any client on the host that needs to access another service always accesses the local haproxy instance, which will route the request to an available service.

Synapse’s design simplifies service implementations in that they do not need to implement any client side load balancing or failover and they do not need to depend on Zookeeper or its language bindings.

Since SmartStack relies on Zookeeper, some registrations and discovery may fail during a partition. They point out that Zookeeper is their “Achilles heel” in this setup. Provided a service has discovered the other services at least once before a partition, it still has a snapshot of those services and may be able to continue operating during the partition. This aspect improves the availability and reliability of the overall system.

Update: If you’re interested in a SmartStack style solution for Docker containers, check out docker service discovery.

Netflix’s Eureka

Eureka is Netflix’s middle-tier, load balancing and discovery service. There is a server component as well as a smart-client that is used within application services. The server and client are written in Java, which means the ideal use case would be for the services to also be implemented in Java or another JVM-compatible language.

The Eureka server is the registry for services. They recommend running one Eureka server in each availability zone in AWS to form a cluster. The servers replicate their state to each other through an asynchronous model, which means each instance may have a slightly different picture of all the services at any given time.

Service registration is handled by the client component. Services embed the client in their application code. At runtime, the client registers the service and periodically sends heartbeats to renew its leases.

Service discovery is handled by the smart-client as well. It retrieves the current registrations from the server and caches them locally. The client periodically refreshes its state and also handles load balancing and failovers.

Eureka was designed to be very resilient during failures. It favors availability over strong consistency and can operate under a number of different failure modes. If there is a partition within the cluster, Eureka transitions to a self-preservation state. It will allow services to be discovered and registered during a partition, and when the partition heals, the members will merge their state again.

Bitly’s NSQ lookupd

NSQ is a realtime, distributed messaging platform. It’s written in Go and provides an HTTP based API. While it’s not a general purpose service registration and discovery tool, they have implemented a novel model of service discovery in their nsqlookupd agent so that clients can find nsqd instances at runtime.

In an NSQ deployment, the nsqd instances are essentially the service. These are the message stores. nsqlookupd is the service registry. Clients connect directly to nsqd instances but since these may change at runtime, clients can discover the available instances by querying nsqlookupd instances.

For service registration, each nsqd instance periodically sends a heartbeat of its state to each nsqlookupd instance. This state includes its address and any queues or topics it has.

For discovery, clients query each nsqlookupd instance and merge the results.

What is interesting about this model is that the nsqlookupd instances do not know about each other. It’s the responsibility of the clients to merge the state returned from each stand-alone nsqlookupd instance to determine the overall state. Because each nsqd instance heartbeats its state, each nsqlookupd eventually has the same information, provided each nsqd instance can contact all available nsqlookupd instances.
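
A rough Go sketch of that client-side merge: query the /lookup endpoint of every nsqlookupd instance for a topic and union the producers. The response shape used here (older releases wrap the payload in a data field) is an assumption to verify against your nsqlookupd version.

```go
// Sketch of client-side discovery against nsqlookupd's HTTP API: query every
// lookupd instance for a topic and merge the producers into one set. The
// response shape below is an assumption tied to older nsqlookupd releases.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

type producer struct {
	BroadcastAddress string `json:"broadcast_address"`
	TCPPort          int    `json:"tcp_port"`
}

type lookupResponse struct {
	Data struct {
		Producers []producer `json:"producers"`
	} `json:"data"`
}

// discover merges producer addresses returned by every lookupd instance;
// lookupd nodes don't know about each other, so the client does the merge.
func discover(lookupds []string, topic string) []string {
	seen := map[string]bool{}
	var nsqds []string
	for _, host := range lookupds {
		resp, err := http.Get("http://" + host + "/lookup?topic=" + url.QueryEscape(topic))
		if err != nil {
			continue // a lookupd being down only narrows the view; it isn't fatal
		}
		var lr lookupResponse
		err = json.NewDecoder(resp.Body).Decode(&lr)
		resp.Body.Close()
		if err != nil {
			continue
		}
		for _, p := range lr.Data.Producers {
			addr := fmt.Sprintf("%s:%d", p.BroadcastAddress, p.TCPPort)
			if !seen[addr] {
				seen[addr] = true
				nsqds = append(nsqds, addr)
			}
		}
	}
	return nsqds
}

func main() {
	fmt.Println(discover([]string{"127.0.0.1:4161", "127.0.0.2:4161"}, "events"))
}
```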

The previously discussed registry components all form a cluster and use strongly or weakly consistent consensus protocols to maintain their state. The NSQ design is inherently weakly consistent but very tolerant of partitions.

Serf

Serf is a decentralized solution for service discovery and orchestration. It is also written in Go and is unique in that it uses a gossip based protocol, SWIM, for membership, failure detection and custom event propagation. SWIM was designed to address the unscalability of traditional heart-beating protocols.

Serf consists of a single binary that is installed on all hosts. It can be run as an agent, where it joins or creates a cluster, or as a client where it can discover the members in the cluster.

For service registration, a serf agent is run that joins an existing cluster. The agent is started with custom tags that can identify the host’s role, env, IP, ports, etc. Once joined to the cluster, other members will be able to see this host and its metadata.

For discovery, serf is run with the members command, which returns the current members of the cluster. Using the members output, you can discover all the hosts for a service based on the tags their agents were started with.
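
A hedged sketch of discovery built on the serf CLI: shell out to serf members, parse its JSON output and filter on a tag. The -format=json flag and the exact output shape used here are assumptions to check against your Serf version.

```go
// Sketch of discovery on top of the serf CLI: run `serf members`, parse the
// JSON output and keep live agents with a matching tag. The -format=json
// flag and the output shape are assumptions about the Serf version in use.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type member struct {
	Name   string            `json:"name"`
	Addr   string            `json:"addr"`
	Status string            `json:"status"`
	Tags   map[string]string `json:"tags"`
}

type membersOutput struct {
	Members []member `json:"members"`
}

func main() {
	out, err := exec.Command("serf", "members", "-format=json").Output()
	if err != nil {
		panic(err)
	}
	var mo membersOutput
	if err := json.Unmarshal(out, &mo); err != nil {
		panic(err)
	}
	// Keep only live agents whose "role" tag marks them as web servers.
	for _, m := range mo.Members {
		if m.Status == "alive" && m.Tags["role"] == "web" {
			fmt.Println(m.Name, m.Addr, m.Tags["port"])
		}
	}
}
```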

Serf is a relatively new project and is evolving quickly. It is the only project in this post that does not have a central registry architectural style, which makes it unique. Since it uses an asynchronous, gossip based protocol, it is inherently weakly-consistent but more fault tolerant and available.

Spotify and DNS

Spotify described their use of DNS for service discovery in their post In praise of “boring” technology. Instead of using a newer, less mature technology they opted to build on top of DNS. Spotify views DNS as a “distributed, replicated database tailored for read-heavy loads.”

Spotify uses the relatively unknown SRV record which is intended for service discovery. SRV records can be thought of as a more generalized MX record. They allow you to define a service name, protocol, TTL, priority, weight, port and target host. Basically, everything a client would need to find all available services and load balance against them if necessary.
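
For illustration, Go’s standard resolver can query SRV records directly; the service and domain names below are placeholders rather than anything Spotify uses.

```go
// Sketch of SRV-based discovery using Go's resolver; the service and domain
// names are placeholders. LookupSRV returns targets sorted by priority and
// randomized by weight, which gives a client basic load balancing for free.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Looks up the _web._tcp.example.com SRV records.
	_, addrs, err := net.LookupSRV("web", "tcp", "example.com")
	if err != nil {
		panic(err)
	}
	for _, srv := range addrs {
		fmt.Printf("%s:%d (priority %d, weight %d)\n",
			srv.Target, srv.Port, srv.Priority, srv.Weight)
	}
}
```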

Service registration is complicated and fairly static in their setup since they manage all zone files under source control. Discovery uses a number of different DNS client libraries and custom tools. They also run DNS caches on their services to minimize load on the root DNS server.

They mention at the end of their post that this model has worked well for them but they are starting to outgrow it and are investigating Zookeeper to support both static and dynamic registration.

SkyDNS

SkyDNS is a relatively new project that is written in Go, uses Raft for consensus and also provides a client API over HTTP and DNS. It has some similarities to Etcd and Spotify’s DNS model and actually uses the same Raft implementation as Etcd, go-raft.

SkyDNS servers are clustered together and, using the Raft protocol, elect a leader. The SkyDNS servers expose different endpoints for registration and discovery.

For service registration, services use an HTTP based API to create an entry with a TTL. Services must heartbeat their state periodically. SkyDNS also uses SRV records but extends them to also support service version, environment, and region.
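
A hedged sketch of what registration and TTL heartbeating against SkyDNS’s HTTP API might look like; the endpoint path, field names and port are assumptions drawn from early SkyDNS examples and should be checked against the version you run.

```go
// Hedged sketch of registering with a SkyDNS-style HTTP API and heartbeating
// the TTL. The endpoint path, field names and port are assumptions, not a
// confirmed SkyDNS contract.
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"time"
)

type registration struct {
	Name        string `json:"Name"`
	Version     string `json:"Version"`
	Environment string `json:"Environment"`
	Region      string `json:"Region"`
	Host        string `json:"Host"`
	Port        int    `json:"Port"`
	TTL         int    `json:"TTL"`
}

func register() error {
	body, err := json.Marshal(registration{
		Name: "web", Version: "1.0.0", Environment: "production",
		Region: "us-east-1", Host: "web1.example.com", Port: 8080, TTL: 30,
	})
	if err != nil {
		return err
	}
	req, err := http.NewRequest("PUT",
		"http://127.0.0.1:8080/skydns/services/web-1", bytes.NewReader(body))
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	// Re-register before the TTL lapses; if this process dies, the entry
	// expires and disappears from DNS responses.
	for range time.Tick(10 * time.Second) {
		register()
	}
}
```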

For discovery, clients use DNS and retrieve the SRV records for the services they need to contact. Clients need to implement any load balancing or failover and will likely cache and refresh service location data periodically.

Unlike Spotify’s use of DNS, SkyDNS does support dynamic service registration and is able to do this without depending on another external service such as Zookeeper or Etcd.

If you are using Docker, skydock might be worth checking out to integrate your containers with SkyDNS automatically.

Overall, SkyDNS is an interesting mix of old (DNS) and new (Go, Raft) technology, and it will be interesting to see how the project evolves.

Summary

We’ve looked at a number of general purpose, strongly consistent registries (Zookeeper, Doozer, Etcd) as well as many custom built, eventually consistent ones (SmartStack, Eureka, NSQ, Serf, Spotify’s DNS, SkyDNS).

Many use embedded client libraries (Eureka, NSQ, etc..) and some use separate sidekick processes (SmartStack, Serf).

Interestingly, of the dedicated solutions, all of them have adopted a design that prefers availability over consistency.

| Name | Type | AP or CP | Language | Dependencies | Integration |
|------|------|----------|----------|--------------|-------------|
| Zookeeper | General | CP | Java | JVM | Client Binding |
| Doozer | General | CP | Go | | Client Binding |
| Etcd | General | Mixed (1) | Go | | Client Binding/HTTP |
| SmartStack | Dedicated | AP | Ruby | haproxy/Zookeeper | Sidekick (nerve/synapse) |
| Eureka | Dedicated | AP | Java | JVM | Java Client |
| NSQ (lookupd) | Dedicated | AP | Go | | Client Binding |
| Serf | Dedicated | AP | Go | | Local CLI |
| Spotify (DNS) | Dedicated | AP | N/A | Bind | DNS Library |
| SkyDNS | Dedicated | Mixed (2) | Go | | HTTP/DNS Library |

(1) If using the consistent parameter, inconsistent reads are possible

(2) If using a caching DNS client in front of SkyDNS, reads could be inconsistent

 

http://jasonwilder.com/blog/2014/02/04/service-discovery-in-the-cloud/