The Polylith team has just released a big update to the Polylith architecture. Now it's even simpler to grow software out of composable LEGO-like building blocks: http://polylith.gitbook.io
so I have a question about the Polylith style. suppose I've got a collection of microservices that expect to communicate over gRPC. if ported into Polylith, would that be... N projects each of which deploys like a microservice, and where each microservice has a corresponding component that knows how to call it? so a consumer of service A would include the client component for service A, which hides the details of protobufs and the network call?
(and if so, how does that work in development? am I actually running all the gRPC servers on different ports, and it's still making real network calls?)
Hi Braden, I’ll jump in and try to answer this, though you may be wise to wait for a more detailed/accurate answer from Joakim or Furkan.
In Polylith, you’d want to put your gRPC endpoints into separate bases - one for each microservice (`project`) in your system. Then we’d advise dividing the business logic within each microservice across multiple components, for code reuse and modularity. To get the “monolithic” development experience (single REPL, refactoring, navigation, etc.) in your development project, you could create two versions of each component that makes a gRPC call to another of your microservices: a “remote” one that makes the real gRPC call (which you use in your production artefacts) and a “local” one that calls the other component’s function directly (which you use in your development project).
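To make that a bit more concrete, here’s a rough sketch (every name in it is invented, just for illustration, and the gRPC part stands in for whatever client library you’d wrap in a component). Both versions expose the same interface namespace, so callers don’t care which one they get:

```clojure
;; components/user-client-local/src/se/example/user_client/interface.clj
;; "Local" version: used in the development project. It calls the
;; user-store component's function directly - no network involved.
(ns se.example.user-client.interface
  (:require [se.example.user-store.interface :as user-store]))

(defn get-user [id]
  (user-store/get-user id))

;; components/user-client-remote/src/se/example/user_client/interface.clj
;; "Remote" version: used in the production projects. It makes the real
;; gRPC call via a grpc component wrapping your client library of choice.
(ns se.example.user-client.interface
  (:require [se.example.grpc.interface :as grpc]))

(defn get-user [id]
  (grpc/unary-call {:service :user-service
                    :method  :get-user
                    :request {:id id}}))
```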
So in your development project you’d have all the components talking directly to each other via function calls, while your production projects use the “remote” versions to make the real gRPC calls. I’m not sure how well I’ve explained that, but hopefully you get the gist of it.
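Each project then picks the version it wants simply by which component it brings in. Assuming the deps.edn-based poly tool setup (names again purely illustrative), it looks roughly like this:

```clojure
;; projects/order-service/deps.edn - a production project, uses the remote version
{:deps {poly/user-client-remote {:local/root "../../components/user-client-remote"}
        ;; ...the other bricks and libraries this service needs
        }}

;; ./deps.edn, :dev alias - the development project, uses the local version
{:aliases
 {:dev {:extra-deps {poly/user-client-local {:local/root "components/user-client-local"}
                     poly/user-store        {:local/root "components/user-store"}}}}}
```

That way, everything in development is ordinary function calls in one REPL, and only the deployed artefacts pay the network cost.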