Successfully evolving micro-services
using consumer-driven contract testing (Part 1)
When migrating from a monolithic to a micro-services architecture, most teams initially try to verify service integration using a centralized approach: distributed integration tests running in some kind of pre-production environment that includes all the services needed for those tests. However, with an increasing number of services, the complexity of managing such a testing environment grows exponentially, mainly for the following reasons:
- The amount of inter-service communication relevant for testing explodes, since each new service may potentially communicate with every other service.
- The test data necessary to run the tests needs to cover all aspects of these interactions, aggravated by the fact that each service has an independent data store, potentially holding redundant data.
- Test data thus becomes stale more often, due to missing roll-back mechanisms comparable to the transactional rollback available in local integration tests.
- Finally, all services participating in such a test must be properly coordinated with respect to their integration and delivery pipelines, which results in stronger coupling of the services and their development teams.
PRESERVING LOOSE COUPLING WHEN TESTING INTEGRATED SERVICES IN DISTRIBUTED ENVIRONMENTS
In any distributed system, the communication between two services can be seen as a consumer-provider relationship. The consumer issues some kind of request against the provider, and the provider may answer with a suitable response.
Whether these requests and responses are exchanged synchronously, or even over HTTP, doesn’t really matter. With asynchronous messaging, for instance, the provider may send a message to any number of subscribed consumers, even though none of them ever requested it in the first place. Still, both communication endpoints share a so-called communication contract, even if it consists only of undocumented expectations. This contract is not merely a message schema or an interface definition language (IDL) description, such as WSDL. It can rather be considered an aggregation of multiple dimensions of both consumer and provider expectations, including interface, business, conversational, quality-of-service and many more constraints. Just because these expectations haven’t been written down doesn’t mean they do not exist.
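To make the idea of a written-down contract concrete, here is a minimal, tool-independent sketch in plain Java: a single interaction captured as an example request paired with the response the consumer expects. All names (the `/customers/42` path, the fields) are purely illustrative, not part of any real API.

```java
import java.util.Map;

public class Main {
    // One interaction of a hypothetical contract: an example request and
    // the response the consumer expects back. Field names are illustrative.
    record Interaction(String method, String path,
                       int expectedStatus, Map<String, String> expectedBody) {}

    // The consumer's expectation for looking up a single customer.
    static Interaction customerById() {
        return new Interaction("GET", "/customers/42", 200,
                Map.of("id", "42", "name", "Jane Doe"));
    }

    public static void main(String[] args) {
        Interaction i = customerById();
        System.out.println(i.method() + " " + i.path() + " expects " + i.expectedStatus());
    }
}
```

Real CDCT tools express such interactions in their own notation, but the essence is the same: an example, not just a schema.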
By writing down the relevant parts of the contract in a suitable way, we can test both consuming and providing services more independently, as shown in the following picture.
As can be seen, loose coupling is preserved by negotiating the necessary contract changes first. The actual implementation and testing of these changes, if necessary at all, may take place independently of each other for both consumers and providers. With this asynchronous integration testing approach, the provider uses the contract to test its service implementation, while all consumers use it for stubbing their communication with the provider. Finally, the provider has to be deployed into production before any of its consumers, since they rely on the new contract. This is a necessary synchronization point, but it can be automated as well. I will explain how this process can be implemented using tools such as Spring Cloud Contract Verifier or Pact in one of the next blog posts in this series. These tools primarily support the generation of contract-based stubs and test cases.
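The dual role of the contract can be sketched with plain JDK types, assuming a hypothetical contract example: the provider checks its real handler against the example, while the consumer derives a stub from the very same example. This is a simplification of what the mentioned tools automate, not their actual API.

```java
import java.util.Map;
import java.util.function.Function;

public class Main {
    // The shared contract example (all values hypothetical).
    static final String PATH = "/customers/42";
    static final Map<String, String> EXPECTED = Map.of("id", "42", "name", "Jane Doe");

    // Provider side: verify that the real request handler honours the example.
    static boolean providerHonoursContract(Function<String, Map<String, String>> handler) {
        return EXPECTED.equals(handler.apply(PATH));
    }

    // Consumer side: a stub answering exactly as the contract promises,
    // so consumer tests can run without the real provider being available.
    static Function<String, Map<String, String>> stubFromContract() {
        return path -> PATH.equals(path) ? EXPECTED : Map.of();
    }

    public static void main(String[] args) {
        // A (fake) provider implementation that happens to honour the contract.
        Function<String, Map<String, String>> impl =
                path -> Map.of("id", "42", "name", "Jane Doe");
        System.out.println(providerHonoursContract(impl));
        System.out.println(stubFromContract().apply(PATH).get("name"));
    }
}
```

Both sides test against the same artifact, which is exactly what allows them to ship independently.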
The observant reader may have noticed that the contract refinement process is not triggered by the provider itself. Rather, consumers should drive the contract by requesting changes, which are then implemented by the provider. In practice, this is often disregarded in favor of simply publishing provider-driven contracts derived from API documentation tools like Swagger. However, this leads to contracts that reveal a white-box view of the provider implementation, causing stronger coupling between services. Moreover, a provider-driven API does not reflect the needs of its consumers and thus results in poor data representations and inefficient service calls.
IMPROVING API QUALITY USING CONSUMER-DRIVEN CONTRACTS
In order to implement a consumer-driven contract testing (CDCT) strategy, it is necessary to change the way the contracts are managed in the first place. While the provider is ultimately responsible for its API contract, the consumers actively evolve it by submitting proposed changes, e.g. as regular Git pull requests. The provider then consolidates these changes into a new version of its API, collaborating with other consumers if necessary. Depending on how much the consolidation is expected to alter the proposal, the consumer may start implementing its changes right away using the provider stubs. Appropriate tools can generate these stubs automatically from the new contract.
An important aspect of this process is that contracts are no longer merely an interface definition, as compared to WSDL for instance. In order to generate the required stubs and test cases, contracts are expressed using suitable request/response examples. This is comparable to acceptance-test-driven development (ATDD) approaches such as specification by example, with a strong focus on inter-service communication. While the actual notation of these examples heavily depends on the CDCT tooling in use, they aim to improve the overall quality of the contract by expressing consumer expectations as example requests and responses. This way, the provider gains insight into which parts of its API are actually used, and how. The provider is now able to:
- split up existing APIs given diverging consumer demands
- merge different APIs into a consolidated one
- remove APIs no longer used by any consumer
Before reaping the benefits of consumer-driven contract testing, you should keep the following in mind:
Contract-based testing is a divide-and-conquer approach addressing the scalability problem of running full-grown integration tests in an ever-growing distributed system. As such, it favours isolated testing for both consumers and providers, focusing on the API contract they share. This, however, implies that all teams contributing to the distributed system adhere to the approach, in order to avoid critical failures in production.
Using CDCT, one can steadily improve the provider APIs, because API changes are driven by the consumers. This, however, does not imply that a consumer fully owns the provider API. Rather, each consumer maintains its own view on the API in the form of example-driven contracts. This does not play well, though, with fully generated API client proxies, e.g. Swagger-generated REST clients used by all consumers. It is recommended to use consumer-specific clients adhering to the respective contract requirements, e.g. consumer-specific Feign client classes.
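A consumer-specific client can be sketched as follows: instead of a generated proxy exposing the provider's entire API, the consumer owns a narrow interface covering only the calls its contract mentions. The interface and stub below are hypothetical, not a real Feign setup; in practice the stub role would be played by a tool-generated stub server.

```java
public class Main {
    // A narrow, consumer-owned client interface: it exposes only the single
    // call this consumer's contract mentions, not the whole provider API.
    interface CustomerNameClient {
        String nameOf(String customerId);
    }

    // A stub backed by the consumer's contract example; null models "not found".
    static CustomerNameClient stub() {
        return id -> "42".equals(id) ? "Jane Doe" : null;
    }

    public static void main(String[] args) {
        System.out.println(stub().nameOf("42"));
    }
}
```

Because the interface is this narrow, provider API changes outside the consumer's contract cannot break the consumer's build.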
CDCT is only a supplementary element of the overall testing strategy. Relying on isolated CDCT tests without having established a suitable unit test suite, for instance, will inevitably cause failures in production. Furthermore, CDCT is no substitute for functional testing, though it might be tempting to specify business acceptance criteria within contracts. CDCT should focus on testing contracts in a flexible manner, without over-specifying details of the underlying application.
Why have you decided to implement consumer-driven contract testing, and what are your experiences?