Be effective with Bitrise CI for Android – the lessons I learned the hard way
I won’t elaborate here on how important and crucial the continuous integration (CI) practice is for any software development team. I’m pretty sure we can all agree on how CI tools support our day-to-day effectiveness and how they might save dozens of hours otherwise spent on non-essential tasks. Yet, it’s common to present CI tools as a hassle: slow, bulky, and unreliable pipelines bloated with chaotic events instead of fast, maintainable feedback loops configured to support both product quality and team flexibility.
Anyway, I hope that after reading this article you will be able to apply these lessons regardless of the architecture pattern behind your interface, be it MVP, MVC, MVVM, or MVI.
The first part explains why and when to adopt these techniques. You can jump straight to the second part for the list of recommendations.
As the title implies, our CI process was far from optimal. We learned what “slow and chaotic” means the hard way. Below, you will find an overview of each issue that slowed us down, with a full explanation of the solution (including code and external links), as well as honest results measured in minutes.
In this article, you will find discussion surrounding architecture, flavor-agnostic unit testing, and Gradle usage, as well as keeping your logs and artifact deployment in order. Additionally, several tips and tricks beyond optimization are included at the end of the article.
It’s not a step-by-step tutorial. We gathered results that work for us, and you have to think them through. If those solutions make sense to you then, and only then, apply them to your environment.
In order to fully understand why we applied a particular optimization, it is crucial to understand what our landscape looked like at the time.
There is a git-flow approach in place, which usually means multiple feature branches exist at the same time in the remote repository. There is at least one pull request per story. Each pull request needs to go through an integration process, meaning the newest commit in a pull request triggers a fresh CI build. That’s done in order to ensure the newest change won’t introduce any flaws. Yep, automation and unit test suites validate each software increment. In some cases, Software Engineers in Test (SETs) write automation tests as “a part of“ the feature.
We are supporting multiple modules as a part of our architecture.
Let’s assume it is a clean-ish architecture with domain, data, and app layers packed into separate modules. Each module has its own unit test suite, from dozens to a few hundred tests per module. We have to support multiple flavors, and they differ greatly. Each flavor has a separate set of automation and unit tests, although most of them are shared.
When it comes to infrastructure, there is a separate Bitrise workflow for every build type: one each for feature development, automation efforts, release (tag) activities, and post-merge builds on develop. With so many distinct configs, we need to run multiple builds every day. We can’t and won’t have an “infinite” number of concurrent jobs, so the time devoted to each build is very important to us. It’s also important because we value things done the right way.
The basic measurement that will prove effectiveness here is build time, both the entire build time and the time of a particular step (such as the unit test step or the deploy step).
The most commonly used feedback loop is the unit test suite, particularly if you support multiple flavors of an Android app and want to be sure that no change breaks any of them. Unit tests are supposed to be a fast and reliable feedback loop that can be automated at the CI level. So, we used docs and tutorials to set them up for all of the flavors. After a few changes to CI, we ended up with a 30-minute unit test step for 3 flavors. Yes, you read that properly: 30 minutes for 3 flavors.
Ok, let’s fix that.
After a little research, we realized we were using two separate steps for unit tests. The Android Unit Test step for Bitrise was running the app module’s unit tests. The Gradle Unit Test step was just running the ./gradlew test task.
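Reconstructed as a sketch (the step IDs and versions are illustrative, not our exact bitrise.yml), the redundant setup looked roughly like this:

```yaml
# Two overlapping unit test steps (a sketch):
- android-unit-test@1:      # runs the app module's unit tests
    inputs:
    - module: app
    - variant: Debug
- gradle-unit-test@1:       # runs `./gradlew test`, i.e. every test task of every module
    inputs:
    - unit_test_task: test
```

Both steps end up compiling and running overlapping test sets, which is where the wasted minutes come from.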
What’s wrong with the Gradle Unit Test step in our case? According to the Gradle documentation, running ./gradlew test from the project root executes the test task in every subproject, and for an Android module that task runs the unit tests of every build variant. In simple terms, ./gradlew test triggers dozens of different test tasks from every module. In our case, it triggered both debug- and release-related tests for every subproject (module). That’s too much redundancy; consider the final result of the ./gradlew test command:
(number of flavors) × (number of supported environments) × (number of modules)
In our case, that meant 3 flavors × 2 environments (debug and release) × 4 modules: 24 test runs where a handful would do.
But the number of tasks triggered is not all we can improve here.
I already mentioned we have several modules. Since it’s a clean-ish architecture, it consists of app, domain, data, and api modules. It’s easy to see that some of those modules are flavor-agnostic: the domain, data, and api layers can and should be treated as libraries. Those are dependencies that could be used by any JVM-compatible code. Do we need to run their tests separately for each flavor? Of course we don’t! Where does that lead us?
Flavor-agnostic unit tests
Split flavor-dependent and flavor-agnostic unit tests. Gain greater control over how your application is tested. Use the Gradle Unit Test step in your bitrise.yml to run targeted flavor-agnostic unit tests, like this:
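A sketch of such a step (the step version is illustrative; the module names come from our layer split):

```yaml
- gradle-unit-test@1:
    title: UnitTests_Flavor_Agnostic_Modules
    inputs:
    # target only the flavor-agnostic library modules instead of `./gradlew test`
    - unit_test_task: ":domain:test :data:test :api:test"
```

Each library module’s tests now run exactly once per build, regardless of how many flavors the app has.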
The unit_test_task attribute enables you to configure a particular task to be run; basically, any Gradle task. You can obviously chain Gradle commands, but I want granularity here. Additionally, the title attribute keeps build logs in order and enables you to track each step separately.
Flavor-dependent unit tests
The second recommendation relates to the Android Unit Test step for Bitrise and how flavor-dependent unit tests are managed. In most cases, I would recommend running only what you need. But I came to the conclusion that ‘run only what you need’ could be counterintuitive in our case.
It’s really easy to break one of the flavors by introducing changes to only one of them. That’s why we ended up running unit tests for every flavor in every build. On top of that, the above-mentioned set of flavor-agnostic tests is triggered. What does that mean for the Bitrise CI setup?
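Something like the following (a sketch; the $FLAVOR and $VARIANT environment variable names are my assumptions, and the step version is illustrative):

```yaml
- android-unit-test@1:
    title: _UnitTestsPerFlavour
    inputs:
    - module: app
    # $FLAVOR (e.g. uat) is injected per build; $VARIANT is the build variant (e.g. Release)
    - variant: "$FLAVOR$VARIANT"
```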
The above snippet runs unit tests in the app module for a particular flavor, injected as an environment variable, and a particular build variant. So, if CI builds only one flavor at a time, this snippet is supposed to be triggered three times, once per flavor. If all of the flavors are built simultaneously, then each flavor should run only its own unit tests in order to avoid redundancy and shave a few minutes off the build. Notice that, before the _UnitTestsPerFlavour step, the UnitTests_Flavor_Agnostic_Modules step is triggered. It runs the flavor-agnostic tests: the domain, data, and feature module unit tests. Either way, all unit tests are always validated.
Alternatively, you can use the following setup to hardcode which flavor’s unit tests should be run:
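For instance (a sketch; the flavor name and step version are illustrative):

```yaml
- android-unit-test@1:
    title: UnitTests_Uat
    inputs:
    - module: app
    - variant: UatRelease   # hardcoded: this build validates only the uat flavor's tests
```

This variant of the setup suits workflows that build all flavors in parallel, where each build owns exactly one flavor’s tests.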
That way we’re all covered, no matter if we’re building all the flavors or just one. Remember, the lesson here is that flavor-dependent and flavor-agnostic unit tests should each be triggered exactly once. There is no redundancy, but there is full coverage. Every software increment is safe.
We started with around 30 minutes per build.
And ended up with the below results when running one flavor: down to ~5–6 minutes per build. When running all the flavors at once, with each flavor finally responsible for its own unit tests, we got down to ~3 minutes per build. Yes, that’s a separate config in order to optimize build time even further.
According to the Bitrise documentation, the following steps deploy their test results to the Test Reports tab:
- Xcode Test for iOS
- Android Unit Test
- iOS Device Testing
- Virtual Device Testing for Android
As noted in the documentation, by default Android unit and UI test results are deployed to the Bitrise deploy directory and provided via the Test Reports tab. They are easily accessible, but the question is: are they really necessary?
We have robust unit tests. They rarely fail in CI because the entire team writes and runs them frequently. And when they do fail, it’s easy to check the Bitrise logs to see which tests failed.
Deciding what to deploy
We already switched from the Android Unit Test step for Bitrise to the Gradle Unit Test step, which does not deploy unit test reports automatically. And we want it that way. What about the rest of the artifacts? For automation builds, we’ve decided not to deploy any APKs. They are not needed.
We also already know that the Virtual Device Testing for Android step deploys UI test results into the Test Reports directory. As an experiment, we decided to move or remove the Deploy to Bitrise.io step completely for all of the builds.
Also, the Deploy to Bitrise.io step is now always triggered before unit tests but after APK creation. That way, only the application (the uatRelease APK, for example) and the UI test report are deployed.
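A sketch of restricting what the step uploads, assuming the standard deploy_path input and the $BITRISE_APK_PATH variable exported by the Android build step (the step version is illustrative):

```yaml
- deploy-to-bitrise-io@2:
    title: Deploy_APK_Only
    inputs:
    # upload a single APK instead of the whole $BITRISE_DEPLOY_DIR
    - deploy_path: "$BITRISE_APK_PATH"
```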
Initially, the Deploy to Bitrise.io step took from 2.1 to 3.2 minutes. After the changes, it takes 0 minutes for some builds and ~8 seconds for most of them.
One of the low-hanging fruits was to change what is done as part of each workflow, since they all have different goals. As I mentioned, we have feature, automation, develop, and release workflows.
In our case, initially, all of the mentioned workflows had basically the same setup. Why is this wrong? Because, as we said, workflows simply have different responsibilities.
Understanding the differences in workflows
I have already mentioned the automation workflow; it is special compared to the other workflows. Its only responsibility is to support Software Engineers in Test in writing and securing the automation test suite. That simple conclusion means we could trim several steps from it; in our case, APK and other artifact creation and deployment. We were also able to get rid of the custom scripts we had there for release app preparation, “runtime” resource optimizations, and beyond.
By doing this, the automation build became a fast feedback loop for the SETs. It takes around 10 minutes less than other builds. I believe it’s a huge win for the SET team.
Investigating tools configuration
Here is a quick and simple story as an example. Our builds produced both uatDebug and uatRelease APKs. Sounds about right, doesn’t it? I started asking questions anyway. We were sure we needed uatRelease for testing purposes. It makes sense, since testing a production-ready app (release) against development data (uat) is one of the best practices. But why did we need uatDebug then?
Trimming unused resources
The sole reason was a misconfiguration of the Charles proxy, which left testers unable to use proxy tools while testing the uatRelease build variant. The famous network_security_config file had been added to the project, but it wasn’t working, since the build variant has to be debuggable. The quick fix was to add the android:debuggable attribute to all uat builds. And since we’re not distributing uat builds through any public channels — it’s secure enough.
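The fix can be sketched as a flavor-specific manifest overlay (the source set path depends on your project layout; tools:ignore suppresses the HardcodedDebugMode lint check that would otherwise flag this):

```xml
<!-- src/uat/AndroidManifest.xml: merged only into uat variants -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools">
    <application
        android:debuggable="true"
        tools:ignore="HardcodedDebugMode" />
</manifest>
```

Because manifest merging is per source set, production variants remain non-debuggable.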
A simple configuration fix to the existing toolset brought an 8-minute time reduction to each build and cured the SETs’ headache.
All numbers together
- Unit test time down from 30 minutes to 3–6 minutes, depending on build type.
- Automation builds cut by another 10 minutes by removing a few unnecessary steps.
- Artifact deployment reduced from 2.1–3.2 minutes to ~8 seconds!
- The fix to the Charles configuration gave us another 8 minutes, thanks to removing the uatDebug build.
We were able to shorten builds by between 34 and 48 minutes each. That was a huge win and a relief, as you can imagine!
We obviously made some rookie mistakes. But the most important part is to learn from them. We were able to adapt quickly, and we have been making other small improvements since then. It can’t happen on a daily basis because we also need to deliver business value to our clients — but with an appropriate plan in place, I’m sure you can do even more.
Tips and tricks beyond optimizations
Bitrise’s and its plugins’ documentation is quite limited. You will need to deep-dive into a plugin’s code if you want to understand the platform fully. Plugin code is mostly open source; you can find the links inside each plugin’s documentation. In particular, review the main.go file if you’re looking for attributes and parameters that can customize the build.
Use the Bitrise CLI in your terminal to test your configuration locally. It will save you a lot of time.
Keep your CI steps as granular as possible and use the title attribute extensively. Greater readability means greater control over time. Solid foundations are the first step toward future optimization.
Do what we haven’t done yet — introduce tools to measure build metrics automatically.
Leverage version control, since Bitrise configuration is essentially infrastructure as code.
That’s a separate story, but at Concentrix Catalyst we optimized APK size by 13% during our internal hackathon day. You should be aware of best practices for Android app configuration: get rid of or optimize resources, configuration, and APK size. These things also impact your build time, from git pull through compilation and tests to deployment, to name just a few examples.
Listen. Observe. Experiment. Formulate a plan and adopt only what’s needed for your team. Good luck!