The simulator can run as a standalone tool (1) to create training and test data, saved either to a remote MongoDB database or to a CSV file. When it runs to simulate reefer container telemetry generation (2), it publishes events to a Kafka topic, and a stream application can save the telemetry records to MongoDB as well. We use MongoDB as a Service on IBM Cloud in our reference implementations.

We have provided the following documented methods for populating the Product database.

Create and save telemetry data with a Kubernetes Job running on a remote cluster (RECOMMENDED). To keep development systems as clean as possible and to speed up deployment of the various scenarios, our deployment tasks have been encapsulated in Kubernetes Jobs. These are runnable on any Kubernetes platform, including OpenShift. The predefined Job for this task creates the required telemetry data and saves it directly to the configured MongoDB database instance. This is the most direct method of telemetry data generation and is what the "happy path" version of the environment deployment requires. If your use case needs the data saved or transmitted elsewhere, you can either adapt the Job here or follow the other sections of this document.

When using Databases for MongoDB on IBM Cloud, the following Kubernetes Secrets must be created in the target namespace from the auto-generated Service credentials. Because the simulator takes some data from an internal datasource, you can use only one of these values: mongodb-url (in the format hostname-a,hostname-b, as the endpoint is a paired replica set). product_id is also one of the values, as the simulator derives the target temperature and humidity level from its internal datasource. -db is used when you want to save the telemetry into the MongoDB database.
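Since the mongodb-url value is a paired-hostname endpoint, the application has to assemble it into a replica-set connection string. The following is a minimal sketch of that step; the database name, replica-set name, and query options are illustrative assumptions, not values taken from the reference implementation.

```javascript
// Sketch: building a replica-set connection string from the
// "hostname-a,hostname-b" value stored in the mongodb-url secret.
// Database name, replicaSet name, and options are assumptions.
function buildMongoUri(hosts, db = "telemetry") {
  return `mongodb://${hosts}/${db}?replicaSet=replset&ssl=true`;
}

console.log(buildMongoUri("hostname-a:27017,hostname-b:27017"));
```

A MongoDB driver (or Mongoose) would then take this URI as its connection argument.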
We are using the simulator to generate data. In the industry, when developing a new manufactured product, engineers do not have a lot of data, so they also use a mix of real sensors and simulators to create fake but realistic data with which to develop and test their models. The historical data need to represent failures as well as the normal characteristics of a Reefer container. For the machine learning environment we can use a CSV file, a MongoDB database, or a Kafka topic as the data source. As of now, our telemetry event structure can be seen in this Avro schema. We have defined some sensors to get interesting correlated or independent features. The data generation environment is shown in the figure below.

The related sections of this documentation are:
- Create and save telemetry data with a Kubernetes Job running on a remote cluster
- Generate Co2 sensor malfunction in the same file
- Add data using the telemetry repository of the simulator
- Generate O2 sensor malfunction in the same file
- Develop the scoring app using the open source stack
- Develop the scoring app with event messaging and MicroProfile
- Engineer dispatching with business process with Cloud Pak for Automation
- Develop the scoring app with Cloud Pak for Applications
- Define the anomaly detection scoring model with Watson Studio
- Define the anomaly detection scoring model with OSS
- Data ingestion and data in motion with Cloud Pak for Integration
- Using MongoDB on OpenShift 4.2 on-premise

In this article, we'll build a RESTful API using Node, Express, and MongoDB. We will create endpoints for creating, reading, updating, and deleting data (the basic CRUD operations). But before we get started, make sure you have Node installed on your system; if not, go to the Node.js website to download and install it.

In an empty folder, run the following command: npm init

This command will ask you for various details, such as the name of your project, the author, the repository, and more. Then it will generate a package.json file in that folder. Note that each router handler function needs to be asynchronous to work.
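The CRUD operations and the asynchronous handlers mentioned above can be sketched as plain async functions over an in-memory store. The store, record shape, and helper names here are illustrative assumptions, not the article's actual code; in the real API each helper would call into MongoDB and be mounted on an Express router.

```javascript
// In-memory sketch of the four CRUD operations the article exposes as
// REST endpoints. Each helper is async, matching the note that the
// route handlers must be asynchronous. Names and shapes are assumptions.
const store = new Map();
let nextId = 1;

async function createRecord(data) {
  const id = String(nextId++);
  const record = { id, ...data };
  store.set(id, record);
  return record;
}

async function readRecord(id) {
  return store.get(id) ?? null;
}

async function updateRecord(id, patch) {
  if (!store.has(id)) return null;
  const updated = { ...store.get(id), ...patch };
  store.set(id, updated);
  return updated;
}

async function deleteRecord(id) {
  return store.delete(id);
}
```

Swapping the Map for a Mongoose model (for example `Model.create`, `Model.findById`, `Model.findByIdAndUpdate`, `Model.findByIdAndDelete`) yields the database-backed version, with each helper awaited inside an Express route.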