This is the blog section. It has two categories: News and Releases.
Files in these directories will be listed in reverse chronological order.
Milkman now supports OAuth2 keys and, more generally, provides a (naturally extensible) way to organise secret credentials.
Putting credentials into environment variables works but has its downsides: when syncing your workspace, you might not want to store secrets in, e.g., GitHub. Therefore, the concept of keys was introduced.
Those keys live next to an environment and are not synced or exported, so they won’t leave your Milkman installation.
Keys are basically objects that have a string value, but it is up to the plug-in to come up with that value. A simple example is the base64 value of a given string.
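For illustration, such a base64 key essentially computes the following. This is a plain Java sketch of the idea, not Milkman's actual plug-in API (class and method names are made up here):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64KeyExample {

    // The "value" of a base64 key: the base64 encoding of a configured source string.
    static String keyValue(String source) {
        return Base64.getEncoder()
                .encodeToString(source.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // prints bXktc2VjcmV0
        System.out.println(keyValue("my-secret"));
    }
}
```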
Another example is OAuth2 support. The actual value of the key will be retrieved from an authorization server and cached. Token refresh happens implicitly, so once you’ve set up your OAuth data, you can use that key without thinking about it again.
I have no concrete plans yet to add support for other kinds of keys, but I have some ideas: I could imagine supporting general JWT minting or integration with an external credential store such as Vault. Let’s see…
This post is a compilation of more complicated use cases that fully use the possibilities Milkman provides. It is a collection of tips and tricks that are not immediately obvious but can be very useful in certain situations.
As Milkman can interact with SQL and NoSQL databases, it can also interact with AWS services. This is done by using the AWS SDK for Java. To use the SDK, you need to add it to your installation. See the following sections for examples.
AWS RDS is just a SQL database in the cloud. To connect to it, you need to add the AWS SDK to your installation. As Milkman uses JDBC for SQL connections, you can add the AWS Advanced JDBC Wrapper to your /plugin
directory, in addition to your usual JDBC driver (like the Postgres or MySQL JDBC connector), and use it like this:
jdbc:aws-wrapper:postgresql://database-pg-name.cluster-XYZ.us-east-2.rds.amazonaws.com:5432/connectionSample
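The wrapper reuses your existing driver's connection string and only splices `aws-wrapper:` into the scheme. As a small sketch (a hypothetical helper, not part of Milkman):

```java
public class AwsWrapperUrl {

    // Turn a plain JDBC url into one handled by the AWS Advanced JDBC Wrapper
    // by inserting "aws-wrapper:" after the "jdbc:" prefix.
    static String wrap(String jdbcUrl) {
        if (!jdbcUrl.startsWith("jdbc:")) {
            throw new IllegalArgumentException("not a jdbc url: " + jdbcUrl);
        }
        return "jdbc:aws-wrapper:" + jdbcUrl.substring("jdbc:".length());
    }

    public static void main(String[] args) {
        System.out.println(wrap(
                "jdbc:postgresql://database-pg-name.cluster-XYZ.us-east-2.rds.amazonaws.com:5432/connectionSample"));
    }
}
```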
Another option to connect to (access-restricted) RDS instances is to use an SSH tunnel. See SSH Jump Hosts for more information.
See AWS SSO for how to authenticate with AWS via SSO.
AWS DynamoDB is a NoSQL database in the cloud. To connect to it, you need to add the AWS SDK to your installation. As Milkman uses JNoSQL for NoSQL connections, you can add the jnosql-dynamodb-document driver to your /plugin
directory and use it like this:
#Parameters
jnosql.document.provider: com.github.warmuuh.jnosql.dynamodb.DynamoDBDocumentConfiguration
jnosql.dynamodb.region: eu-central-1
jnosql.dynamodb.profile: my_profile
jnosql.dynamodb.prefix: some-prefix-
jnosql.dynamodb.selectmode: scan
See AWS SSO for how to authenticate with AWS via SSO.
To support authentication via AWS SSO, simply drop the SSO artifact into your /plugin
directory. Then you can authenticate via the AWS CLI:
# in terminal:
aws sso login --profile my_profile
All AWS-related functionality in Milkman will pick up the active login and use it when opening connections.
For connecting to databases that are not directly accessible from your machine, you can use an SSH tunnel. Usually, you need to manually open a tunnel via ssh and use it as a (SOCKS) proxy for your connection. Milkman can do this in a best-effort way, as some drivers natively support this. It is currently supported for:

- JDBC connections, by adding socksProxyPort, socksProxyHost and socksProxyRemoteDns to the connection string
- other connections, for which the system properties socksProxyHost and socksProxyPort are set; this is respected by some drivers and HTTP clients

If you use JDBC, an easier method is to use jdbc-sshj, which opens a tunnel when needed. The JDBC string then looks a bit more complicated:
# connecting to an aws rds instance via a jump host
# {{host}} and {{port}} are placeholders that are replaced by the jdbc-sshj driver
jdbc:sshj-native://<user>@<jump-host>?remote=<aws-endpoint>:5432;;;jdbc:postgresql://{{host}}:{{port}}/<database>?user=<user>&password=<password>
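For the system-property route mentioned above, the relevant JVM settings look like this. This is a minimal sketch; the host and port are assumptions and must point at a tunnel you opened yourself (e.g. with `ssh -D 1080 user@jump-host`):

```java
public class SocksProxySetup {

    // Point the JVM's global SOCKS proxy settings at a locally opened tunnel.
    // Drivers and HTTP clients that honor the standard JVM networking
    // properties will then route their connections through it.
    static void useSocksTunnel(String host, int port) {
        System.setProperty("socksProxyHost", host);
        System.setProperty("socksProxyPort", String.valueOf(port));
    }

    public static void main(String[] args) {
        useSocksTunnel("localhost", 1080);
        System.out.println(System.getProperty("socksProxyHost") + ":"
                + System.getProperty("socksProxyPort"));
    }
}
```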
With rising security requirements, more and more services require client certificates for authentication. Milkman can handle this by adding the client certificate to the request. To do this, you need to add the client certificate to the list of known certificates in Milkman preferences. After that, you can select the certificate in the request settings (cog-wheel next to the http request method).
Test requests can be used to create a sequence of requests that are executed in order. This can be used to handle more complex scenarios with multiple dependent requests. To use data from one request in the next, you can use the scripting component.
These requests run in isolation, so changes to environment variables (unless specified differently in the properties of the test) are not propagated to the currently active environment.
Request 1: e.g. a request that creates a user and returns it in the response body
var user = mm.response.body.body;
mm.setEnvironmentVariable("user", user)
console.log("user: " + user)
Request 2: log in with the user from the previous request and get a token
//request body
grant_type=password
&client_id=...
&client_secret=...
&scope=openid
&username={{js:escape("{{user}}")}}
&password=password
// script to extract the token:
var body = JSON.parse(mm.response.body.body)
milkman.setEnvironmentVariable("user-token", body.access_token)
milkman.setEnvironmentVariable("user-rt", body.refresh_token)
You can load external JavaScript files and use them in the scripting part, which makes the scripting component a lot more powerful.
// preload script: https://cdnjs.cloudflare.com/ajax/libs/chai/4.3.4/chai.min.js
// then you can use chai.js in your scripts:
var should = chai.should();
var body = JSON.parse(milkman.response.body.body);
body.id.should.equal(mm.getEnvironmentVariable("userId"));
var headers = milkman.response.headers.entries;
var tokenHeader = null;
for (var i = 0; i < headers.length; i++) {
if (headers[i].name === 'x-token')
tokenHeader = headers[i];
}
should.not.equal(tokenHeader, null);
tokenHeader.value.should.match(/^0.*$/) //is it according to new format?
Sometimes, you might want to use content from external files in your requests, such as a CSV file loaded from disk. Out of the box, Milkman does not support this, but you can use the following workaround:
//in an external preload script, e.g. "~/.milkman-scripts/loadFile.js"
function loadFile(path) {
    var pathObj = java.nio.file.Paths.get(path);
    var bytesObj = java.nio.file.Files.readAllBytes(pathObj);
    var bytes = Java.from(bytesObj); // convert the Java byte[] into a JavaScript array
    return String.fromCharCode.apply(null, bytes);
}
//then use that as a preload script in milkman: file:///Users/.../.milkman-scripts/loadFile.js
//and for example in the request body, use it like this:
{
"csvFile": "{{js:base64(loadFile('/path/to/test.csv'))}}"
}
Sometimes, scripting can be used to add debugging information to your requests. For example, you can print the contents of a JWT token to the console:
// using pre-load script: https://cdnjs.cloudflare.com/ajax/libs/crypto-js/4.0.0/crypto-js.min.js
// using custom pre-load script:
function parseJwt(token) {
    try {
        var words = CryptoJS.enc.Base64.parse(token.split('.')[1]);
        var textString = CryptoJS.enc.Utf8.stringify(words);
        return JSON.parse(textString);
    } catch (e) {
        return null;
    }
}
// in scripting of the request:
var body = JSON.parse(mm.response.body.body)
console.log(JSON.stringify(parseJwt(body.access_token), null, 2))
In a lot of companies, many services exist, and sometimes there is a directory with all service specifications. Milkman can make use of these directories (currently, only the apis.guru format is supported; open a ticket to request more formats) to import service definitions. This lets you import your company’s services and use them in your requests with just a few clicks.
If you want to export requests in a specific format, you can create custom templates. These templates are written in Mustache with whitespace-control extensions. Some examples of HTTP exports:
Useful because some Alpine Docker images don’t have curl installed, but do have BusyBox wget:
wget -O- {{#headers.entries-}}{{#enabled-}}
--header="{{&name}}: {{&value}}"
{{-/enabled}}{{/headers.entries-}}
{{&url}}
If you need to implement the request in Java/Spring:
RestTemplate restTemplate = new RestTemplate();
HttpHeaders headers = new HttpHeaders();
{{#headers.entries}}{{#enabled-}}
headers.add("{{&name}}", "{{&value}}");
{{/enabled}}{{/headers.entries-}}
{{#body.body-}}
String body = "{{&.}}";
{{-/body.body}}
HttpEntity<String> request = new HttpEntity<>(
{{-#body.body-}}
body,
{{-/body.body-}}
headers);
ResponseEntity<String> response = restTemplate
.exchange("{{url}}", HttpMethod.{{httpMethod}}, request, String.class);
String responseBody = response.getBody();
Have you ever started an open-source project, dived right into the code, discovered new API features that you loved, fiddled around with the build process, and then taken a little break and never come back to it?
If that sounds familiar, this article is for you.
In April, I set out to write Milkman, an extensible Postman replacement that finally has the features I was always missing in Postman. In general, Postman and Milkman are applications that help test web APIs by sending and organizing requests. Milkman is a JavaFX application without many dependencies, having a smaller footprint than, for example, Electron apps, and is optimized for fast start-up times. The codebase is now about 12.5K lines of code, and most of it was written in a time span of ~20 days in April, normally working around 4–5 hours per day. Since then, I’ve spent about one day a week, 2–4 hours, fixing reported issues and/or adding new features.
Now, writing a large application like this without falling into the usual traps of losing motivation requires some best practices that I would like to discuss today. I am not writing about coding style, architectures that scale, or how to allow your initial prototype to grow — these are topics for another day.
The biggest problem with open-source projects is always motivation — how to keep up the motivation and how to not lose interest and then stop? All of the practices mentioned below will help you maintain motivation and produce a mature (and hopefully open-source) project.
Being productive helps you spend time on what you actually want to work on and what you find fun, instead of producing churn. It also helps you create a lot of features within a small amount of time.
This is along the lines of “Don’t Break The Chain” and might be a well-known motivational technique. As I had some spare time in April, I could spend each day working on Milkman. Later on, I could only spend one day a week developing, but I could still achieve a lot in that time.
It was important to me to not skip a day, and even if it was only 30–60 minutes, simply adding some sentences to the readme was better than nothing. Sometimes, I was not in the mood to work much, but as soon as I started, I wanted to finish that one feature, and by the end, I would spend 2–3 hours instead.
As I only have that one day a week, I want to spend it as efficiently as possible. Therefore, I am planning the things I want to do on that day earlier, maybe taking some notes on the train ride. That way, when I fire up my IDE on that day, I am already ready to go and don’t have to think about what to do.
Also, I always plan my days by features. I wanted to finish this or that feature on that day. This also means that I had to estimate how long a feature would take and sometimes split it into smaller ones.
The third thing I planned for those days was the prioritization of the list of features that I wanted to implement. Each day, there was a must-have feature set and a nice-to-have feature set. After finishing the must-haves, I would normally work ~3 hours anyway. I could then decide to go on with the nice-to-have features or just stop there without any bad feelings. Most often, I picked one or two nice-to-haves as well.
I wanted to be able to spit out new releases and features in the blink of an eye, so I spent some time streamlining the build process so that it was not in my way. If I always had to do tedious steps to release Milkman, this would be a barrier to even getting started on a new feature. Right now, I only have to type ‘mvn deploy’ and the whole application gets packaged (containing a bundled JRE, an executable for Windows, etc.) and uploaded to the GitHub release page automatically. I can go away and get a coffee meanwhile.
Also, I set up a CI that provides access to nightly builds so people can even get development versions of the product.
Although it is just one command, it still takes time, and one day, I may be able to further optimize it and put all handling of this into GitHub Actions so that I can trigger it once and continue working on something else.
This one might be a bit controversial, and I would never advise it (I always write a lot of tests normally), but I have to be honest — this was one of the drivers for being productive. I did not test any obvious functionality. It is a desktop application, and if something does not work, I will see it while using it. I did set up some tests, but they mainly test some of the more complicated stuff (postman collection import or garbage collector quirks in JavaFx).
This one might bite me later though. If I want to change things, I might accidentally break something without realizing it. I am still not sure if the trade-off is worth it; let’s see what the future brings. Up until now, I did not run into many issues because of this.
Keeping my focus on the goal I wanted to achieve helped me stay effective and develop only what helps solve the problem. Because of the nature of applications, there is a lot of work to be done before you finally see some minimum viable product (MVP). Iterating on this is then easier, and you can see your progress. To optimize that, the MVP should be as small as possible, so you can see results very early on and get motivated by the outcome and iterations (return on investment, so to speak).
When building the MVP for Milkman, I was solely concentrating on the core thread. No design, no optimization, nothing that diverted me from the happy-path use case of my application. The result was even less than an MVP but still a working application with persistence, a “usable” UI, and something that executed requests.
One thing that was really important was to always reflect on what I was currently doing. Is it contributing to the feature that I am currently working on? Yes? Good. No? Either stop it or at least timebox it (to 30 minutes or 1 hour; this needs discipline) and drop it rigorously if it does not work out. I also needed to accept loose ends and non-perfect code. I needed to implement some hacks here and there to get things done. At some point later, when those hacks proved to be an obstacle, they got removed. I could obviously spend all day refactoring the whole codebase, but this does not contribute to my current goal.
Normally, I start a hobby project with the goal of solving a problem but also learning something new (otherwise, it is called ‘work’ :D). Now, for this project, I avoided any experiment. I wanted to learn Kotlin or Go, maybe look into graph databases or marry Spring and JavaFx for a nicer development experience. No. The whole codebase of Milkman is boring plain old Java, nothing special. But that is what I am most productive in. I did not want to spend time finding solutions to problems introduced by my lack of knowledge about a different language or tool.
Writing a replacement for Postman gave me a pretty good plan of which features I wanted to have, but also which features I did not want to implement. This helped me define the next features to work on and leave out unnecessary stuff.
Additionally, I defined a set of general requirements before starting development, such as extensibility and fast startup time. All features and code written were evaluated against those requirements. I did this, again, to always produce stuff that contributes to the goals of Milkman.
This does not mean writing code without bugs (especially if there are not many tests), but I don’t allow discovered bugs to linger in my code (maybe with a “FIXME” close by or a GitHub issue). If I did, they would pile up and rot my development experience. Nobody likes fixing bugs, but that’s why they are always in the must-have feature set planned for a day (see “Planning That One Day”).
Finally, I would like to shortly talk about social media. Getting feedback is great, so you might want to use Twitter or even Twitch for live-coding to get feedback and also a pat on the back for your hard work, which is always a great feeling (GitHub stars, anyone? :D).
I chose not to because I think this might lead to the wrong kind of motivation. I develop Milkman to solve my own needs. I use it day by day and fix whatever issue I discover. Having people talk about it is a nice side effect, and I love for people to use Milkman, but that is not the reason for me to go on. It has to be a product that YOU use, but I guess that is clear anyway.
Milkman dynamically (re-)generates an AppCds cache to improve the startup time of a JavaFx application. This blog post goes into how to apply AppCds to a desktop application with real-world requirements like shipping, updates, and plugins.
Currently, I am developing Milkman, an alternative to Postman that is extensible and faster (as it is not using Electron but, sigh, the more lightweight JavaFx stack).
One goal of this application is to be a “jump-in and do something, then leave” type of application. I don’t want to wait 10 seconds until it is ready to be used before doing a small request and closing it again.
Although Milkman already starts quite fast (compared to, e.g., Electron apps), it can be optimized further. One option is to use GraalVm to compile to native. Although this would be possible (due to the usage of a compile-time dependency-injection framework, hardwire), the project is not yet at a stage where this can be set up easily, not to mention Windows support. Also, JavaFx is still on its way to supporting GraalVm.
Another, perhaps lesser-known, option is to use Class Data Sharing (CDS), or more specifically AppCds (which extends CDS to application classes). AppCds basically caches all information about loaded classes in a file that can be mapped into memory instead of rescanning the whole classpath on every start. This was a feature of OracleJdk but is available in OpenJdk as well, starting with JDK 10. As Milkman comes with a custom JRE, I can rely on having the right JDK to profit from AppCds.
There is a pretty good article about AppCds at codeFx. They do mention that it is hard to apply AppCds to JavaFx applications, as JavaFx is not packaged with the JDKs anymore. For that reason, I moved to Liberica as a JDK, which is basically OpenJdk + JavaFx (and some other features, I guess). The mentioned blog already contains pretty good details on how to initialize AppCds, so I won’t repeat those details here. The final result, in the case of Milkman, is a ~80Mb file. Startup on my machine gets roughly 0.8 seconds faster (I did no real benchmark), which is quite an improvement.
This post is about how to apply AppCds in a final product, where you can’t ask users to run a specific sequence of commands or run a script. Also, the nature of Milkman (no installs, but overwriting the application to update it; drag-&-drop plugins) leads to changes in both the classpath and the actual classes (due to updates), which invalidates any generated AppCds cache. I also cannot precompile the AppCds cache and ship it, as this would nearly double the size of the shipped artifact.
This situation leads us to a list of requirements:
To create the AppCds cache, you need a list of classes that should be part of it. As this won’t change too much, it can be precompiled and put into the shipped artifact. It might not contain classes used in “unknown” plugins, though, so this might be something to compile dynamically as well. The AppCds cache can then be generated by executing the according java command, which Milkman does on a separate thread during startup, if necessary.
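The "according java command" boils down to the standard two-step AppCds flow. The JVM flags below are the real JDK 10+ ones; the file names and the main-class name are made-up placeholders. A sketch of the commands Milkman would assemble and spawn in the background:

```java
import java.util.List;

public class AppCdsCommands {

    // Step 1 (can be precompiled and shipped): record which classes get loaded.
    static List<String> recordClassList(String classpath, String listFile) {
        return List.of("java",
                "-XX:DumpLoadedClassList=" + listFile,
                "-cp", classpath,
                "milkman.MilkmanApplication"); // placeholder main class
    }

    // Step 2: dump the AppCds archive for that class list.
    static List<String> dumpArchive(String classpath, String listFile, String archive) {
        return List.of("java", "-Xshare:dump",
                "-XX:SharedClassListFile=" + listFile,
                "-XX:SharedArchiveFile=" + archive,
                "-cp", classpath);
    }

    public static void main(String[] args) {
        // The application itself is then started with:
        //   java -Xshare:auto -XX:SharedArchiveFile=milkman.jsa -cp <classpath> ...
        System.out.println(dumpArchive("milkman.jar", "classes.lst", "milkman.jsa"));
    }
}
```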
If something is off with the AppCds cache, the -Xshare:auto flag will lead to the cache being ignored and the application starting in the normal way.
Now, this is the hard part: how to identify whether the current application was loaded via AppCds, or whether the AppCds cache is actually invalid and was not used during startup. I could only find one reliable way to see if a cache was used: try using it again in a sub-process, but with -Xshare:on, which leads to the application being terminated on startup if the AppCds cache could not be used. Besides other things (like comparing whether the classpath used for creating the cache has changed), this is exactly what Milkman uses (see code).
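The validation trick can be sketched like this (assumptions: `java` is on the PATH, and a throwaway `-version` run is enough to trigger archive loading; the real check would also pass the application classpath, since the archive is bound to it):

```java
public class AppCdsCheck {

    // Returns true if the given AppCds archive is usable: with -Xshare:on the
    // JVM refuses to start (non-zero exit) when the archive cannot be mapped.
    static boolean archiveUsable(String archiveFile) {
        try {
            Process probe = new ProcessBuilder(
                    "java", "-Xshare:on",
                    "-XX:SharedArchiveFile=" + archiveFile,
                    "-version")
                    .redirectErrorStream(true)
                    .start();
            return probe.waitFor() == 0;
        } catch (Exception e) {
            return false; // no java binary, interrupted, etc. -> treat as unusable
        }
    }

    public static void main(String[] args) {
        // A missing archive is reported as unusable.
        System.out.println(archiveUsable("does-not-exist.jsa"));
    }
}
```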
If this check shows that the cache is actually invalid, the file has to be replaced. That sounds easier than it is, as the file might be locked (if the classpath changed, the file is not locked, but in the case of class changes, it is). Renaming still works (on Windows), so the file is renamed and then deleted on the next run.