Our role in developing the warehouse intelligence infrastructure at PSI
PSI Poland is part of the Berlin-headquartered PSI AG, which has been producing industrial software for over 50 years. The PSI Poznań branch has successfully implemented and developed the PSI WMS warehouse management system.
As part of PSI WMS development, the team created the concept of Warehouse Intelligence, which uses artificial intelligence to optimize logistics processes.
The goal was to create an intuitive and useful infrastructure
The PSI team asked us to develop an infrastructure that was intuitive to use, quick to launch, and automated. The environment had to be prepared so that a person without advanced technical knowledge could manage all of its components. Minimizing the time needed to create new environments and run tests was very important. The target platform would be used to develop Machine Learning models in the form of experiments, using the customer’s software on various hardware configurations.
Platform development – step by step
Before starting work, we consulted with the customer, learned about their needs, and carefully planned the development of the environment. Because testing was planned on virtual machines of different sizes, the customer decided to develop the platform directly in the cloud.
PSI received a very flexible environment that scales to specific tasks. The company also did not have to invest in hardware for software development and verification; for research and development environments, this minimizes the risk of choosing the wrong hardware and eliminates the problem of aging infrastructure.
When preparing infrastructure for our customers, we almost always use Infrastructure as Code (IaC). Here its advantages were clear: it let us build multiple copies of the environment in a short time, making it possible to work simultaneously in different configurations.
The entire environment was built on Amazon Web Services, using Amazon Elastic Compute Cloud (EC2) instances. Due to the anticipated high loads, we deployed Amazon Elastic Kubernetes Service (EKS), which let us increase the number of instances at moments of peak demand and reduce it when the capacity was no longer needed.
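As an illustration, the scaling boundaries of an EKS managed node group can be declared in a few lines of Terraform. This is a minimal sketch under assumed names, not the actual configuration: the cluster name, node group name, IAM role, and subnet variable are all placeholders.

```hcl
# Sketch: an EKS managed node group that the cluster can scale
# between one and ten worker instances depending on demand.
resource "aws_eks_node_group" "experiments" {
  cluster_name    = "wi-cluster"             # placeholder cluster name
  node_group_name = "experiments"
  node_role_arn   = aws_iam_role.node.arn    # assumed IAM role resource
  subnet_ids      = var.private_subnet_ids   # assumed subnet variable

  instance_types = ["m5.xlarge"]

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 10
  }
}
```

With a cluster autoscaler watching the pending workload, nodes are added up to `max_size` under load and released again afterwards.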
We automated EC2 instance scaling for the customer’s convenience. At the same time, our solution made it possible to launch GPU-equipped EC2 instances almost instantly, so the developers and specialists working with the customer in this area could run GPU-based Machine Learning model-building tests whenever needed. We also launched GitLab for the project and described all processes as CI/CD pipelines: GitLab CI serves as the interface for managing the whole platform from the code repository, following the GitOps approach. As the GitOps tool we used Argo CD, deployed within the Kubernetes cluster.
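A GPU node group of this kind can be sketched in the same way. The snippet below is illustrative only (names are placeholders, the instance type is an assumption): it scales from zero, so GPU instances exist, and incur cost, only while experiments need them, and a taint keeps ordinary workloads off the GPU nodes.

```hcl
# Sketch: a GPU node group that scales from zero on demand.
resource "aws_eks_node_group" "gpu" {
  cluster_name    = "wi-cluster"             # placeholder cluster name
  node_group_name = "gpu-experiments"
  node_role_arn   = aws_iam_role.node.arn    # assumed IAM role resource
  subnet_ids      = var.private_subnet_ids   # assumed subnet variable

  instance_types = ["g4dn.xlarge"]           # assumed NVIDIA GPU instance type

  scaling_config {
    desired_size = 0
    min_size     = 0
    max_size     = 4
  }

  # Only pods that tolerate this taint are scheduled on GPU nodes.
  taint {
    key    = "nvidia.com/gpu"
    value  = "true"
    effect = "NO_SCHEDULE"
  }
}
```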
The Network Load Balancer service provided convenient and controlled management of traffic entering the cluster. We chose it deliberately: we wanted an environment built with Cloud Native technology, reproducible with other public cloud providers or in a private cloud.
We store the resulting data in Amazon Simple Storage Service (S3), an object storage service that gives us scalability, data availability, security, and performance. Logs generated during platform experiments are shipped to the Elasticsearch service we run for the customer, together with the Logstash and Kibana components.
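The result bucket itself is a small piece of Terraform. A minimal sketch, assuming a hypothetical bucket name (S3 bucket names are globally unique, so the real one differs); versioning keeps earlier experiment results recoverable.

```hcl
# Sketch: S3 bucket for experiment results.
resource "aws_s3_bucket" "results" {
  bucket = "wi-experiment-results"   # placeholder, globally unique in reality
}

# Keep previous versions of result objects recoverable.
resource "aws_s3_bucket_versioning" "results" {
  bucket = aws_s3_bucket.results.id
  versioning_configuration {
    status = "Enabled"
  }
}
```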
We described the created infrastructure as Terraform code, versioned in the previously mentioned GitLab, so the customer can quickly launch it in different configurations. The solution we introduced allows various Machine Learning experiments to be run with the customer’s dedicated software, depending on the hardware used.
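Different hardware configurations can then be expressed as a handful of Terraform input variables, with each experiment run supplying its own values. A hypothetical sketch; the actual variable names in the customer’s code base may differ:

```hcl
# Hypothetical input variables for one experiment configuration.
variable "worker_instance_type" {
  description = "EC2 instance type for experiment workers"
  type        = string
  default     = "m5.xlarge"
}

variable "worker_count" {
  description = "Number of worker instances to start"
  type        = number
  default     = 2
}
```

A GPU run would then only override the values, e.g. `terraform apply -var="worker_instance_type=g4dn.xlarge"`, leaving the rest of the code untouched.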
The environment adapts automatically and quickly to the current demand for computing power through its scaling mechanisms. All infrastructure components run in containers orchestrated with Kubernetes. The customer received a highly resilient environment for testing and ongoing software development.