
ScaleCube Reactive Microservices

The Future is reactive - The Future is right here!

Consuming services

We’ve seen how to define service interfaces and how to implement them; now we need to consume them. The service interface contains everything ScaleCube services needs to know in order to invoke a service.

A service implementation may be located anywhere in the cluster and may run as any number of instances. The ScaleCube .proxy() solves this for you: no matter where the instances are or how many there are, it gives you full control to consume them.

A service consumer is a member of the service cluster, sharing the same Cluster Membership / Failure Detection / Gossip discovery group. Membership is achieved by joining the cluster via one of the known cluster members, a.k.a. the .seeds() nodes.

  // Create a microservice consumer and join the cluster via a seed node.
  Microservices consumer = Microservices.builder()
    .seeds(providerAddress) // the address of any node in the cluster.
    .build();

Using the service interface class, we can request a .proxy() from the consumer for a given .api(GreetingService.class). Calling create() builds a proxy with the descriptors needed to address that specific service in the cluster.

  // Get a proxy to the service API.
  GreetingService greetingService = consumer.proxy()
    .api(GreetingService.class) // the interface of the service.
    .create();

Using the service proxy to call a service is as trivial as a simple method call: the proxy locates the service instances in the cluster and routes the requests to them. Services support Java's CompletableFuture for async request/response and rx.Observable for the reactive-streams pattern.

  // Call the service and, when complete, print the greeting.
  greetingService.greeting(new GreetingRequest("Joe"))
    .whenComplete((result, error) -> { // handle the async response.
      if (error == null) {
        System.out.println(result);
      }
    });

  // Subscribe to a local or remote service using rx reactive streams.
  greetingService.greetingStream(new GreetingRequest("Joe"))
    .subscribe(onNext -> System.out.println(onNext));

By default, ScaleCube services provides round-robin service instance selection: for each request the proxy takes the currently available service endpoints and sends the message to one instance at a time, so load is balanced across all currently live service instances. Using .router(...) you can also control the endpoint selection logic.
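Round-robin selection itself is easy to sketch. The following standalone snippet is illustrative only (it is not ScaleCube's actual Router SPI): it cycles an atomic counter over the list of currently live instances.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin endpoint selection (not ScaleCube's actual Router SPI).
class RoundRobinSelector<T> {

  private final AtomicInteger counter = new AtomicInteger();

  // Pick the next instance, cycling over the currently live list.
  T select(List<T> liveInstances) {
    if (liveInstances.isEmpty()) {
      throw new IllegalStateException("no live service instances");
    }
    int index = Math.floorMod(counter.getAndIncrement(), liveInstances.size());
    return liveInstances.get(index);
  }
}
```

Because the list of live instances is passed in on every call, the selector naturally adapts as instances join or leave the cluster.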

Service proxy options:
.router(...) lets you choose one of the available selection strategies or provide a custom one.
.timeout(Duration) sets how long to wait for a service response; the default is 30 seconds.
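What the timeout amounts to can be sketched without ScaleCube at all: the caller gets back a future that completes exceptionally with a TimeoutException when no response arrives in time. A minimal standalone illustration using Java 9's CompletableFuture.orTimeout (ScaleCube's internal mechanism may differ):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Illustrative response-timeout wrapper (not ScaleCube's internals): the returned
// future fails with a TimeoutException if the response does not arrive in time.
class TimeoutSketch {

  static <T> CompletableFuture<T> withTimeout(CompletableFuture<T> response, long millis) {
    return response.orTimeout(millis, TimeUnit.MILLISECONDS); // Java 9+
  }
}
```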

  // Get a proxy to the service API.
  GreetingService greetingService = consumer.proxy()
    .api(GreetingService.class)        // the interface of the service.
    .timeout(Duration.ofSeconds(1))    // timeout limit for the expected response.
    .router(CanaryTestingRouter.class) // service instance selector.
    .create();
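The CanaryTestingRouter above is one example of custom selection logic. The idea behind canary routing is to send a small fraction of traffic to newly deployed (canary) instances and the rest to stable ones. A standalone sketch of that policy (illustrative; not ScaleCube's implementation):

```java
import java.util.List;
import java.util.Random;

// Illustrative canary selection policy (not ScaleCube's CanaryTestingRouter):
// a small fraction of requests goes to canary instances, the rest to stable ones.
class CanarySelector<T> {

  private final double canaryRatio;
  private final Random random;

  CanarySelector(double canaryRatio, Random random) {
    this.canaryRatio = canaryRatio;
    this.random = random;
  }

  // Route to the canary pool with probability canaryRatio, otherwise to the stable pool.
  T select(List<T> stable, List<T> canary) {
    boolean useCanary = !canary.isEmpty() && random.nextDouble() < canaryRatio;
    List<T> pool = useCanary ? canary : stable;
    return pool.get(random.nextInt(pool.size()));
  }
}
```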