• 5 Posts
  • 207 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • I’m not completely sure which classes you’re talking about - but it sounds like the Business Process Layer

    I would call them “services” but I’m looking for a less overloaded term. Maybe capabilities? Controllers?

    “Controllers” (in dotnet at least) is usually reserved for the class that initially takes in the HTTP request after middleware (auth, model binding, etc.)

    It’s probably easier with a concrete example, so let’s say the action is “Create User”.

    It depends on the rest of your architecture, but I usually start with a UserController that takes all user-related requests.

    To make sure the Controller doesn’t get super big with logic, it sends the request through MediatR to a CreateUserCommandHandler - see the sketch below.
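
    To illustrate - a minimal sketch with hypothetical names (the real shape depends on your project), showing the controller handing the request off to MediatR while the handler does the actual work:

    ```csharp
    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using MediatR;
    using Microsoft.AspNetCore.Mvc;

    // The command carries the request data; the Guid is the new user's id.
    public record CreateUserCommand(string Email, string Name) : IRequest<Guid>;

    [ApiController]
    [Route("api/users")]
    public class UserController : ControllerBase
    {
        private readonly IMediator _mediator;
        public UserController(IMediator mediator) => _mediator = mediator;

        // The controller only translates HTTP into a command and back into a 201.
        [HttpPost]
        public async Task<IActionResult> Create(CreateUserCommand command)
        {
            var userId = await _mediator.Send(command);
            return CreatedAtAction(nameof(Create), new { id = userId });
        }
    }

    // The "does the needful" part lives here, not in the controller.
    public class CreateUserCommandHandler : IRequestHandler<CreateUserCommand, Guid>
    {
        public Task<Guid> Handle(CreateUserCommand request, CancellationToken ct)
        {
            // ...validate, persist via a repository, raise events, etc.
            return Task.FromResult(Guid.NewGuid());
        }
    }
    ```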

    But it’s a bit vague which parts you’re asking about…

    “there is a class of … classes/modules that does the needful.”

    Everything else you’ve described -

    “API resources, queue workers, repositories, clients” and serializers

    - is “cross-cutting”, “Data Access Layer”, “Service Agent Layer”, and maybe a bit “Anti-corruption Layer”. But there are a lot of other things in between that “do the needful”.


  • Well, to be clear, this was not supposed to be a jab at gitflow, or me complaining specifically about gitflow. I merely used “gitflow” as an example of a set of conventions and standardizations that comes nicely packaged as one bundle.

    But there’s nothing wrong with gitflow. I was just saying - they’re not set-in-stone rules you must follow religiously. If you’re using it and it seems more practical to adapt the flow to your own use case, don’t worry that it’d be considered wrong not to stick strictly to it.


  • I think a common misconception is that there’s a “right way to do git” - for example: “we must use Gitflow, that’s the way to do it”.

    There are no strict rules for how you should use git; it’s just a tool, with some guidelines for what would probably work best in certain scenarios. And it’s fine to diverge from those guidelines and add or remove steps depending on what kind of project or team structure you’re working in.

    If you’re new to Git, you probably shouldn’t just look up Gitflow, structure your branches like that, and stick strictly to it. It’s gonna be a bit of trial and error, altering the flow to create a setup that works best for you.



  • It’s not a big red flag, but it indicates that the product is not fully open source. You can get the full Community edition from GitHub, but for the self-hosted Enterprise version you have to contact sales.

    So all the Enterprise features are most likely closed source, and when you buy/license it, you’ll just get the compiled version. And since their cloud hosting has a “Per 1,000 sessions/mo” pricing model, their Enterprise self-hosted version might have that as well. So it’ll have some kind of DRM/license management, and maybe a “call home” check on your license or usage every once in a while.






  • Sure, but testing usually relies purely on whether your assumptions are right or not - whether you test automatically or manually.

    Like if you’re manually testing a login form, for example, and you assume that you’ve filled in the correct credentials - but you didn’t, and the form still lets you continue - then you’ve failed the testing, because your assumption was wrong.

    Like even if the specs are wrong and you write a test for them - let’s say in a calculator: Calculate(2+2).Should().Be(5) - if that’s your assumption based on the specs or something, you can just as well start up the calculator, manually click through its UI, code something that returns 5, and deliver it.

    Then once someone corrects you, you have to start the whole thing over: open up the calculator, click through the UI, do the input - now it’s 4, yay!

    If you had just written a test - even one relying on a wrong spec - it’s still very easy to change the test and fix the assumption.

    Also, let’s say next sprint you have to build a deduct function in the calculator, which breaks the + operation. Now you have to re-test all operations manually to check you didn’t break anything else. If there were unit tests with like 100 different operations, you just run them all, see they’re all still green, and you’re done - see the sketch below.
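
    To make that concrete - a minimal xUnit + FluentAssertions sketch (the Calculator class is hypothetical):

    ```csharp
    using FluentAssertions;
    using Xunit;

    // Hypothetical class under test.
    public class Calculator
    {
        public int Add(int a, int b) => a + b;
        public int Deduct(int a, int b) => a - b; // the new feature that might break Add
    }

    public class CalculatorTests
    {
        [Theory]
        [InlineData(2, 2, 4)]   // this said 5 while the spec was wrong - a one-character fix
        [InlineData(10, -3, 7)]
        public void Add_ReturnsTheSum(int a, int b, int expected)
            => new Calculator().Add(a, b).Should().Be(expected);

        [Fact]
        public void Deduct_DidNotBreakAnything()
            => new Calculator().Deduct(5, 3).Should().Be(2);
    }
    ```

    Re-running all of these after adding Deduct takes seconds, instead of clicking through every operation by hand.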


  • He’s already pointing out the problems himself:

    The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they’re not getting very much at all.

    Yea, so at Spotify the profits are distributed “equally” - meaning Taylor Swift, with 1 billion listens per month, gets 99.9999% of the profits, while [[Obscure metal band]] with 100 listens gets $0.001. However, if I only listened to [[Obscure metal band]] and nothing else, shouldn’t my entire $5.99/month go to [[Obscure metal band]], and not be pooled with stuff I didn’t listen to?
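
    A toy calculation of the difference (all numbers made up):

    ```csharp
    using System;

    // Pooled "pro-rata": everyone's fee goes into one pot, split by global play share.
    // User-centric: each subscriber's fee is split only among what *they* played.
    decimal subscription = 5.99m;
    int subscribers = 1_000_000;

    long megastarPlays = 1_000_000_000;
    long obscurePlays = 100;

    decimal pot = subscription * subscribers;
    decimal obscureProRata = pot * obscurePlays / (megastarPlays + obscurePlays);

    // One fan who ONLY played the obscure band, before any platform cut:
    decimal obscureUserCentric = subscription;

    Console.WriteLine($"Pro-rata (from the whole pot): {obscureProRata:0.00}");   // ~0.60
    Console.WriteLine($"User-centric (from one fan):   {obscureUserCentric:0.00}"); // 5.99
    ```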

    How would this work with a “Post-Open software administrative organization”? Ubuntu has 1 billion installs; my [[Obscure open source library]] is used by a couple of companies, and it’s the only “Post-Open software” those companies use - do I get that 1 percent of their revenue? Or does the administrative organization siphon it away, keep 0.1%, and send the other 0.9% to the top 10 “Post-Open projects”…?

    Companies would have to publish which “Post-Open software” they’re using, and to what extent. For example, if Ubuntu were Post-Open software: it uses loads of inner projects and libraries, which in turn use more and more libraries, some of which might be Post-Open software too. You’d have to create a whole financial dependency tree per company to determine how to distribute their revenue fairly - rough sketch below.
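
    A rough sketch of those mechanics - the names, the tree, and the 50/50 split are all made up, just to show how one company’s fee could be propagated down a dependency tree:

    ```csharp
    using System;
    using System.Collections.Generic;

    var deps = new Dictionary<string, List<string>>
    {
        ["ubuntu"] = new() { "glibc", "obscure-lib" }, // hypothetical Post-Open deps
        ["glibc"] = new(),
        ["obscure-lib"] = new(),
    };

    var payouts = new Dictionary<string, decimal>();

    // Each package keeps half and passes the rest on, split evenly among its deps.
    void Distribute(string pkg, decimal amount)
    {
        var children = deps[pkg];
        var kept = children.Count == 0 ? amount : amount * 0.5m;
        payouts[pkg] = payouts.GetValueOrDefault(pkg) + kept;
        foreach (var child in children)
            Distribute(child, (amount - kept) / children.Count);
    }

    Distribute("ubuntu", 1000m); // one company's yearly fee, say $1000

    foreach (var (pkg, amount) in payouts)
        Console.WriteLine($"{pkg}: {amount}"); // ubuntu: 500, glibc: 250, obscure-lib: 250
    ```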


  • I manually redraw my service architecture, because I can create higher-quality documentation that way than by trying to auto-generate it.

    But you can get a baseline depending on which cloud you use. For example, on AWS you can use Workload Discovery, which generates a system overview.

    “Bonus (optional) question: Is there a way to handle schema updates? For example generate code from the documentation that triggers a CI build in affected repos to ensure it still works with the updates.”

    Yes - for example, if your API exposes an OpenAPI schema, you can use the build server to generate a client library, like a NuGet or npm package - something like the command below.
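
    For example with the openapi-generator CLI (one option among several - NSwag is a common alternative in .NET; the flags here are the standard ones, but your setup may differ):

    ```sh
    # Hypothetical CI step: regenerate a C# client from the service's OpenAPI spec
    npx @openapitools/openapi-generator-cli generate \
      -i https://api.example.com/swagger/v1/swagger.json \
      -g csharp \
      -o ./generated-client
    ```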

    Then in the API consumer you can add a build step that checks if there are new versions of the client library. Or set up Dependabot to create PRs that update those dependencies - example config below.
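
    For the Dependabot option, a minimal dependabot.yml could look something like this (assuming a NuGet-based consumer):

    ```yaml
    # Check weekly for new versions of NuGet dependencies
    # (including the generated client library) and open PRs.
    version: 2
    updates:
      - package-ecosystem: "nuget"
        directory: "/"
        schedule:
          interval: "weekly"
    ```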