When we introduced the first round of the Power BI REST APIs, we made it really easy for all different types of applications and devices to land data in Power BI. We have seen developers connect all kinds of IoT devices to Power BI in order to visualize their data updating in real time in a Power BI dashboard. However, pumping in data at a couple hundred rows per second will cause your datasets to become very big very fast.
So what do you do with all this data? How do you enable real-time exploration and monitoring of data without maxing out the size of your datasets or your subscription? In the latest release of the API, we have introduced a default retention policy concept that allows you to automatically clean up old data while keeping a constant window of new data flowing in.
The first retention policy that we have released is called basic first in, first out (FIFO). When enabled, rows collect in a table until the table reaches 200,000 rows. Once the data goes beyond 200,000 rows, the oldest rows are dropped from the dataset. This maintains between 200,000 and 210,000 rows of only the latest data.
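To make the trimming behavior concrete, here is a small Python sketch of the rule described above. This is a hypothetical simulation of the policy's effect, not the service's actual implementation; the thresholds come straight from the numbers in the text (rows accumulate past 200,000, and once the table exceeds the 210,000 upper bound, the oldest rows are dropped back down to 200,000).

```python
from collections import deque

MIN_ROWS = 200_000  # retention floor: the latest 200,000 rows are always kept
MAX_ROWS = 210_000  # once the table grows past this, the oldest rows are dropped

def push_rows(table, new_rows):
    """Append new rows, then trim the oldest rows back down to MIN_ROWS
    once the table exceeds MAX_ROWS (a simplified model of basicFIFO)."""
    table.extend(new_rows)
    if len(table) > MAX_ROWS:
        while len(table) > MIN_ROWS:
            table.popleft()  # drop the oldest row first
    return table

table = deque()
push_rows(table, range(250_000))
print(len(table))  # trimmed back to 200,000
print(table[0])    # oldest surviving row is the 50,000th one pushed
```

Because trimming only kicks in above the upper bound, the table size oscillates between 200,000 and 210,000 rows, which matches the range stated above.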
The retention policies can be enabled when you first create your datasets. All you need to do is add the "defaultRetentionPolicy" parameter to your POST datasets call and set it equal to "basicFIFO" like so:
POST https://api.powerbi.com/beta/myorg/datasets?defaultRetentionPolicy=basicFIFO
Content-Type: application/json
Authorization: Bearer eyJ0-XXXXXXX-XXXXXX-XXXXXXX-6QNg

{
  "name": "SalesMarketing",
  "tables": [
    {
      "name": "Product",
      "columns": [
        { "name": "ProductID", "dataType": "Int64" },
        { "name": "Name", "dataType": "string" },
        { "name": "Category", "dataType": "string" },
        { "name": "IsCompete", "dataType": "bool" },
        { "name": "ManufacturedOn", "dataType": "DateTime" }
      ]
    }
  ]
}
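If you are calling the API from code rather than the console, the request above can be assembled with nothing but the Python standard library. This is a sketch under a couple of assumptions: the bearer token is a placeholder you would replace with a real Azure AD access token, and the request is built but not sent so you can inspect it first.

```python
import json
import urllib.request

API_URL = "https://api.powerbi.com/beta/myorg/datasets"
ACCESS_TOKEN = "eyJ0-XXXXXXX"  # placeholder: substitute a real Azure AD token

def build_create_dataset_request(dataset, retention_policy="basicFIFO"):
    """Build (but do not send) the POST request that creates a dataset
    with the given default retention policy enabled."""
    url = f"{API_URL}?defaultRetentionPolicy={retention_policy}"
    body = json.dumps(dataset).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {ACCESS_TOKEN}",
        },
    )

dataset = {
    "name": "SalesMarketing",
    "tables": [{
        "name": "Product",
        "columns": [
            {"name": "ProductID", "dataType": "Int64"},
            {"name": "Name", "dataType": "string"},
        ],
    }],
}

req = build_create_dataset_request(dataset)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

The key detail is the `defaultRetentionPolicy=basicFIFO` query-string parameter: the JSON body is the same as for any dataset creation call, and only the URL changes.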
You can try the call out now using our interactive API console: http://docs.powerbi.apiary.io/#reference/datasets/datasets-collection/create-a-dataset.
For more information on the APIs please check out http://dev.PowerBI.com.