In a previous post, I mentioned that I have worked exclusively in fintech for the past couple of years. During that time, I was picked for a greenfield project focused on performance testing and metrics gathering. It was a very interesting experience because development from a research perspective is an odd mix; it felt like unit testing an MVP. Poking and prodding a product, collecting metrics, improving the tests, repeat: I became a scientist. But before earning my lab coat, I had to figure out what performance was.
What is performance?
At the time of this project, I had been reading Designing Data-Intensive Applications by Martin Kleppmann. The first chapter includes a comprehensive overview of what performance is. For this article, we will stick to the metrics Martin applies to online systems. Performance is measured by the response time of the service (usually in milliseconds). A response time is the duration between a client sending a request and receiving the response. A one-time request yields an accurate response time, but it is an unrealistic metric to judge performance by. Instead, performance metrics are gathered by loading the system over time with multiple users and examining the distribution of response times (averages and percentiles); that distribution is our measure of performance.
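As a quick illustration (a sketch of my own, not from the book; the sample values are made up), here is how you might summarize a distribution of response times in Dart, reporting both the average and the 95th percentile:

main() {
  // Hypothetical sample of response times, in milliseconds
  final responseTimesMs = [120, 95, 110, 480, 102, 99, 130, 105, 98, 101];

  // Average latency across all samples
  final average = responseTimesMs.reduce((a, b) => a + b) / responseTimesMs.length;

  // p95: sort the samples and take the value 95% of the way through
  final sorted = [...responseTimesMs]..sort();
  final p95 = sorted[(sorted.length * 0.95).ceil() - 1];

  print("Average: ${average.toStringAsFixed(2)} ms, p95: $p95 ms");
}

Notice how the single 480 ms outlier only pulls the average up a bit, while the p95 exposes it fully; that is why distributions tell you more than a single number.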
So what are performance driven services?
I’ve been working in microservices with different development methodologies for some time. Test-driven development (TDD) and behavior-driven development (BDD) are the reigning methodologies employed by teams. However, performance standards are usually kept separate from development. What I’m proposing isn’t new, but rather a shift of priorities during the CI/CD process. Adding performance parameters as a deployment constraint has the potential to catch low-performing services early and address resource waste (usually an issue in the JVM space). I don’t see this idea catching a huge wave of support, since the most popular development methodologies already require a huge commitment from developers. But for any experimenters out there, please don’t hesitate to shoot me an email and let me know how it goes.
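To make this concrete, here is a minimal sketch (my own; the threshold and function names are assumptions, not an established tool) of what such a deployment gate could look like: a script that exits non-zero when measured latency crosses a budget, which any CI/CD system treats as a failed stage.

import 'dart:io';

// Hypothetical latency budget agreed on by the team
const latencyBudgetMs = 200;

// Fail the pipeline stage if the measured average latency exceeds the budget
gate(double measuredAverageMs) {
  if (measuredAverageMs > latencyBudgetMs) {
    print("FAIL: ${measuredAverageMs.toStringAsFixed(2)} ms exceeds the $latencyBudgetMs ms budget");
    exit(1); // a non-zero exit code fails the CI/CD stage
  }
  print("PASS: ${measuredAverageMs.toStringAsFixed(2)} ms is within budget");
}

main() {
  // In practice this number would come from a load-test run
  gate(114.86);
}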
Show me the code…and what the hell is Dart?
Dart is my new favorite language to build tools with and a language popularized by the hybrid mobile framework Flutter. I like some of the liberties the language takes with feature implementation and the fact I can build hybrid mobile apps with what I learn is a plus. Two features that will aid in building our performance tool are Streams and Futures.
Futures
Futures are very similar to the async/await implementation in JavaScript: a feature designed for handling asynchronous code. Futures return values in two ways:
- As a future object, with the operation running in the background. Its value can only be fetched/operated on by chaining a then method.
Future<int> returnADigit() async {
  return 42;
}

someFunctionNotRequiringAsync() {
  returnADigit().then((value) {
    print("THIS IMPLEMENTATION IS MORE LIKE PROMISES");
  });
}
- As the awaited value itself. But this requires some preconditions: the method that calls the future has to be marked async, and the await keyword has to precede the call to the future.
Future<int> returnADigit() async {
  return 42;
}

someFunctionRequiringAsync() async {
  final int digit = await returnADigit();
  print("THIS IMPLEMENTATION IS MORE LIKE ASYNC/AWAIT");
}
Streams
Streams are the Dart feature that inspired this article. I find them similar to Observables, specifically the RxJS implementation. A stream is a source you listen to for value updates. Similar to observables, the events you listen for are categorized (data, error, done), and specific code is run for each category.
import 'dart:async';
import 'dart:math';

class SampleStream {
  SampleStream() {
    // THIS CALLBACK RUNS EVERY SECOND
    _timer = Timer.periodic(Duration(seconds: 1), (t) {
      // THIS ADDS A RANDOM NUMBER TO THE STREAM
      _controller.sink.add(Random().nextInt(100));
    });
  }

  closeStream() {
    // CANCEL THE TIMER FIRST SO NOTHING IS ADDED AFTER THE CLOSE
    _timer.cancel();
    _controller.close();
  }

  late Timer _timer;
  final _controller = StreamController<int>();
  Stream<int> get stream => _controller.stream;
}
So there are many things to cover here, but I’ll keep it simple:
- The controller handles what is passed into the stream.
- The stream getter gives us access to our class’s stream, by way of the controller of course.
- The Timer.periodic call in the constructor runs its callback according to our preset duration; in this case, every second.
- Lastly, we generate a value and add it to the controller’s sink. The stream is a constant pipe connection: when we add a value to the sink, every listening object can see it.
Here’s an example of how we can listen to a stream:
import 'package:bambam/sampleStream.dart';

sampleStreamListener() {
  final brandNewStream = SampleStream();
  brandNewStream.stream.listen(
    (data) {
      print("Printing the random number from the stream: $data");
    },
    onError: (err) {
      print("There was an error: $err");
    },
    onDone: () {
      print("Stream has been completed");
    },
    cancelOnError: false,
  );
  // CLOSE AFTER A FEW SECONDS SO THE LISTENER ACTUALLY RECEIVES SOME VALUES
  Future.delayed(Duration(seconds: 5), () => brandNewStream.closeStream());
}
Building The Tool – BamBam
BamBam? Sorry, I’m a fan of The Flintstones, and we are building a tool that constantly hits a URL…it fits. So we’ve covered Streams and Futures; now let’s talk about how they will be used. When performance testing, you always test with constraints in mind. The most common are concurrent users and duration; we will stick to those in this article.
- Duration – This value determines how long our performance test will run.
- Concurrent users – This value determines how many active connections will hit our service. It also tells us how many streams we will need.
Let’s first build out our stream class:
import 'dart:async';
import 'package:http/http.dart' as http;

class BamRequest {
  BamRequest() {
    _timer = Timer.periodic(Duration(seconds: 1), (t) async {
      final startTime = DateTime.now();
      final response = await http.get(Uri.parse('https://jsonplaceholder.typicode.com/todos/1'));
      // or: await http.get(Uri.parse('https://google.com'));
      final endTime = DateTime.now();
      final report = BamReport(endTime.difference(startTime), response.statusCode);
      _controller.sink.add(report);
    });
  }

  void closeStream() {
    // CANCEL THE TIMER SO NOTHING IS ADDED AFTER THE CONTROLLER CLOSES
    _timer.cancel();
    _controller.close();
  }

  late Timer _timer;
  final _controller = StreamController<BamReport>();
  Stream<BamReport> get stream => _controller.stream;
}

class BamReport {
  final Duration _latency;
  final int _status;

  BamReport(this._latency, this._status);

  Duration get latency => _latency;
  int get status => _status;

  @override
  String toString() {
    return "Request: Latency ${_latency.inMilliseconds}ms, Status: $_status";
  }
}
Similar to the Stream example before, but a somewhat more involved implementation. Instead of returning a simple integer, we record the duration of the HTTP call, create a report of the call, and send that to the stream. Our report class, for now, only reports the latency and status of the HTTP call.
Next up is building our concurrent users, compiling reports for a specified duration, and then reporting back to the user.
import 'dart:io';
import 'package:bambam/bamRequest.dart';

main(List<String> arguments) async {
  final bamReportList = <BamReport>[];
  final bamRequestList = List.generate(5, (index) {
    return BamRequest();
  });

  bamRequestList.map((request) => request.stream).forEach((stream) {
    stream.listen(
      (data) {
        bamReportList.add(data);
      },
      onError: (err) {
        print('Error: $err');
      },
      cancelOnError: false,
    );
  });

  await Future.delayed(Duration(seconds: 20), () {
    // forEach (not a lazy map) so the close actually runs for every request
    bamRequestList.forEach((request) => request.closeStream());
    generateReport(bamReportList);
    exit(0);
  });
}

generateReport(List<BamReport> reportList) {
  final avgLatency = reportList
          .map((report) => report.latency.inMilliseconds)
          .reduce((report1, report2) => report1 + report2) /
      reportList.length;
  print("""
----------------------------------------
Performance Report
Average Latency: ${avgLatency.toStringAsFixed(2)} ms
# of requests: ${reportList.length}
""");
}
From the top, this program does the following:
- Creates a list to hold our reports
- Generates a list of BamRequest objects. In this example, I’m sticking to 5 concurrent users, so I made a list of 5 streamable objects.
- Next, we access the stream for each object in our list and define a listen function for each stream. On each addition of data, we add the resulting report to our reports list.
- Finally, we use a delayed future to close all the streams, generate a report, and exit.
When you run it, a sample result will look like this:
----------------------------------------
Performance Report
Average Latency: 114.86 ms
# of requests: 95
What’s Next?
The next article in this series will cover adding a tool like this to the pipeline. In the meantime, I will make several updates to this tool, such as:
- Allowing command line variables for customization of duration, concurrent users, and latency limit, as sketched after this list. (If I have time, I’ll update the tool to handle custom URLs…but we’ll see.)
- A pass/fail system based on calculated latency.
- Adding distribution data to BamBam
- Adding BamBam to a CI/CD tool (probably Jenkins)
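As a taste of the first item, here is a rough sketch (my own; none of this is in BamBam yet, and the argument order is hypothetical) of what command line customization could look like:

// Hypothetical usage: dart bin/bambam.dart <durationSeconds> <concurrentUsers> <latencyLimitMs>
main(List<String> arguments) {
  final durationSeconds = arguments.isNotEmpty ? int.parse(arguments[0]) : 20;
  final concurrentUsers = arguments.length > 1 ? int.parse(arguments[1]) : 5;
  final latencyLimitMs = arguments.length > 2 ? int.parse(arguments[2]) : 200;

  print("Running for ${durationSeconds}s with $concurrentUsers users, latency limit ${latencyLimitMs}ms");
  // From here, these values would replace the hardcoded 5 users and 20 seconds above
}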
All source code for BamBam can be found here.
Additional Resources
Below are some useful links if you’d like to learn more about Dart, Flutter, Streams/Futures, and load testing:
- Performance Testing flavors – https://assertible.com/blog/web-service-performance-testing-tips-and-tools-for-getting-started
- Dart – https://dart.dev/
- Flutter – https://flutter.dev/
- An excellent video on Streams by the Flutter Dev Team
