Datapel API Reference
Welcome to the API Reference for the Datapel Cloud.WMS Inventory Management System (IMS), a powerful and versatile tool designed to streamline the management of order and inventory data for businesses of all sizes. This documentation serves as your comprehensive guide to understanding and using our API, providing the information and resources needed to integrate our system seamlessly into your applications and processes.
Why an Inventory Management System Matters
Efficient order and inventory management is the cornerstone of successful business operations. Whether you are a small wholesaler, a manufacturing powerhouse, or an e-commerce giant, the ability to track, monitor, and control your inventory is essential for reducing costs, improving customer satisfaction, and driving profitability. Our Inventory Management System is designed to simplify this complex task, offering real-time insights, process automation, and scalability.
What Can You Achieve with Our API
The Datapel API opens up many possibilities for developers, businesses, and entrepreneurs seeking to optimise their inventory and order management processes. Here are some potential applications and use cases that can and have been developed using our API:
- E-commerce Integration: Seamlessly sync your e-commerce platform with our WMS to ensure accurate product availability and pricing on your online store. This leads to fewer out-of-stock situations and enhances the overall shopping experience for your customers.
- Warehouse Automation: Implement robotic or IoT solutions to automate inventory movement within your warehouses. Our API provides real-time inventory data, enabling robots or devices to make informed decisions about restocking and retrieval.
- Multi-Location Management: Manage inventory across multiple physical locations, whether they are warehouses, retail stores, or distribution centres. Our API allows you to centralise inventory control and maintain consistency across your network.
- Customer and Supplier Collaboration: Facilitate better communication with your customers and suppliers by sharing real-time inventory data. Suppliers can use this information to optimise production schedules and ensure timely deliveries, reducing the risk of stockouts. Customers can check availability and account status, reducing overall customer servicing costs.
- Inventory Forecasting: Utilise historical data and predictive analytics to forecast demand and optimise inventory levels. With our API, you can build custom forecasting algorithms and strategies to minimise overstock and understock situations.
- Order Fulfillment Automation: Create systems that automate order fulfillment processes, ensuring orders are picked, packed, and shipped efficiently, with real-time inventory updates for customers.
- Inventory Reporting and Analytics: Develop custom dashboards and reporting tools to gain valuable insights into your inventory performance. Track key metrics such as turnover rate, holding costs, and sales velocity.
- Inventory Auditing and Compliance: Implement auditing solutions that use the IMS API to verify inventory accuracy and compliance with industry regulations. This is especially crucial in sectors with strict quality control requirements.
- Inventory Optimization Algorithms: Design algorithms that automatically optimise inventory reorder points, safety stock levels, and economic order quantities based on real-time market conditions.
- Retail Shelf Management: Optimise the placement of products on store shelves by utilising our API to ensure high-demand items are always readily available, leading to increased sales.
The Cloud.WMS API empowers you to take control of your inventory and order management processes and drive operational excellence across your business. Whether you are looking to enhance the efficiency of your supply chain, boost customer satisfaction, or make data-driven decisions, our API is your gateway to achieving these objectives with precision and ease.
In this API Reference, you will find detailed documentation, code examples, and best practices to help you harness the full potential of our Inventory Management System API.
Base URL
Production:
https://api2.datapel.net/api.datapel/v2.0
How it Works
Retrieving Data
Use the HTTP GET method to retrieve data from your workspace via endpoint resources. This includes both lists and individual records. For example, you would use the GET method to retrieve a list of sales orders made in a particular month, or the contact details of a customer.
- Successful responses return an HTTP 200 status code.
- By default, all successful responses on the Datapel API are returned as JSON.
- The API requires a security domain or scope to be provided with TOKEN and REFRESH requests; for public API resources use pub.
- Forms and Reports can also be returned in PDF format; ensure you apply the correct content type header when required.
- An example GET request for a particular product looks like this → https://api2.datapel.net/api.datapel/v2.0/pub/product?34
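As a minimal sketch, the Python snippet below issues such a GET request with the requests library; the bearer token value is a placeholder and token acquisition is covered in the Authentication section.

import requests

BASE_URL = "https://api2.datapel.net/api.datapel/v2.0"
HEADERS = {
    "Authorization": "Bearer <your access token>",  # placeholder - see Authentication
    "Accept": "application/json",
}

# Retrieve the public product resource; a 200 status code indicates success.
response = requests.get(f"{BASE_URL}/pub/product", headers=HEADERS, timeout=60)
if response.status_code == 200:
    print(response.json())
else:
    print("Request failed:", response.status_code, response.text)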
JSON responses and date formats
By default, JSON-formatted responses are returned; be sure to use the HTTP header "application/json" when making a request.
Any date returned in JSON datasets is in UTC format, represented as YYYY-MM-DDTHH:mm:SS, for example '2009-04-29T10:12:20'.
To reduce overall data transfer requirements, a record-level TIMESTAMP is typically available in the format '2021-10-01 16:35:03', indicating the time the record was last modified.
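As a small illustration, both formats can be parsed in Python with the standard library; the example values are the ones shown above.

from datetime import datetime

# UTC dates in JSON datasets, e.g. '2009-04-29T10:12:20'
utc_value = datetime.strptime("2009-04-29T10:12:20", "%Y-%m-%dT%H:%M:%S")

# Record-level TIMESTAMP values, e.g. '2021-10-01 16:35:03'
last_modified = datetime.strptime("2021-10-01 16:35:03", "%Y-%m-%d %H:%M:%S")

print(utc_value, last_modified)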
Filtering Data
The Datapel API uses the OData standard as the syntax allowing queries of specific data records. By using OData filter queries, you can obtain specific subsets of data, reducing the amount of unnecessary information and enhancing the performance of your workflows.
When using filters, you can use comparison operators like eq for equals, ne for not equals, and so on. You can also use logical operators such as and, or, and not to combine multiple conditions. Additionally, there are functions to refine string-based searches. For example, startswith(fieldName,'string') checks if a field starts with a specified string, endswith(fieldName,'string') checks if it ends with one, and substringof('string',fieldName) verifies if a field contains a particular substring.
To explore the syntax and example queries review the OData Standards Support section of this reference guide.
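As a quick illustration ahead of that section, a hypothetical request returning only products whose item number starts with 'WID' could look like the following; the ItemNumber field name is assumed for the example, so check the endpoint reference for the actual property names.
https://api2.datapel.net/api.datapel/v2.0/pub/product?$filter=startswith(ItemNumber,'WID')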
Pagination
A single HTTP GET request can return at most 500 records. This is reasonable because most HTTP clients have a timeout limit of 2 - 5 minutes and do not allow transactions to wait longer than that. A request running longer than the limit gets an HTTP timeout error. Therefore, it is often necessary to tune complex queries and lower the data size so they complete within the timeout restriction. Long-running queries can also degrade server response times for other users and tenants, so it's best practice to issue smaller queries more often rather than large one-off data requests.
The OData API provides several pagination options for query results.
Client-side pagination uses query options on the client side to create an offset that restricts the amount of data returned from the server.
A client-side pagination query can be made using the following URI parameters:
- The $top parameter indicates the number of records to return in the batch. The default and maximum number is 500 records.
- The $skip parameter indicates the number of records in the full data set to skip before fetching data.
For example, a query with $skip=2000&$top=500 returns the fifth page of data where the page size is 500.
Advantages
- Faster initial page load
- It works well with OData compliant list and table web controls to fetch records on demand. DevExtreme UI controls from DevExpress automatically change the $skip value in order to scroll through the data set.
- It allows reverse scrolling by decreasing $skip values. This allows UI controls to avoid buffering all data as the user scrolls up and down.
Considerations when using OData pagination
- For each page, the query is re-executed and then moves forward through the data set to reach the specified $skip value. Not only is this inefficient, but it can also lead to unstable results. The data set could be changing due to simultaneous creating, updating, or deleting of the data. That said, most modern databases implement caching and will generally store prior query data, so this does not present a major limitation or issue.
- If a record is inserted on a prior page after it has been read, it shifts the data set to the right, causing the last record from the previous page to be pushed into the current page. That record is then read again because the $skip value of the current page is one record too small, resulting in duplicates in the total data set. You can avoid this issue by using $orderBy=TimeStamp to push newly created records to the end of the data set.
- If a record is deleted from a prior page after it has been read, it shifts the data set to the left. The first record of the current page is lost because the $skip value of the current page is one record too large. There's no good workaround when dealing with concurrent hard deletes from the data set. Note these potential data loss issues make "last modified since" queries especially unreliable: if a record is lost in a query, it can only be picked up by a later attempt after it's modified.
Most queries will return a total record count, which allows your request to select an exact page number and identify the total number of pages to retrieve. Alternatively, continue to iterate over the query using the $skip keyword until no records, or fewer than 500 (the page maximum), are returned.
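A minimal paging loop sketch in Python, assuming a bearer token is already held, the public product resource as the target, and a JSON array response; it orders by TimeStamp as suggested above and keeps increasing $skip until a short page is returned.

import requests

BASE_URL = "https://api2.datapel.net/api.datapel/v2.0"
HEADERS = {"Authorization": "Bearer <your access token>", "Accept": "application/json"}
PAGE_SIZE = 500  # maximum records per request

records = []
skip = 0
while True:
    resp = requests.get(
        f"{BASE_URL}/pub/product",
        params={"$top": PAGE_SIZE, "$skip": skip, "$orderBy": "TimeStamp"},
        headers=HEADERS,
        timeout=120,
    )
    resp.raise_for_status()
    page = resp.json()  # assumes the endpoint returns a JSON array of records
    records.extend(page)
    if len(page) < PAGE_SIZE:  # fewer than 500 records means the last page was reached
        break
    skip += PAGE_SIZE

print(f"Fetched {len(records)} records")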
OData Standards Support
What is OData?
OData is a way to query data sources over the web using a standard syntax. It allows you to request specific data columns, with filter expressions, to return the required data sets, typically in JSON format. Because the OData specification is standardised and ISO/IEC approved, you can theoretically use any OData-compliant data source and it will respond in the same way to an OData-formatted query.
Parameters are made up of the various OData functions for selecting and filtering entities.
The main OData functions are:
- $select
- $expand
- $filter
- $orderBy
- $top
- $skip
- $apply
These parameters are chained together after the entity using '&':
- Get the country and company name of all customers where country is 'UK' - http://services.odata.org/Northwind/Northwind.svc/Customers/?$select=CompanyName,Country&$filter=Country eq 'UK'
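When building these query strings in code, it is usually easier to let an HTTP library join and URL-encode the parameters. A sketch against the public Northwind sample service used in the example above ($format=json is requested because the sample service defaults to Atom XML):

import requests

# Same query as above: company name and country of all customers in the UK.
resp = requests.get(
    "http://services.odata.org/Northwind/Northwind.svc/Customers/",
    params={
        "$select": "CompanyName,Country",
        "$filter": "Country eq 'UK'",
        "$format": "json",
    },
    timeout=60,
)
print(resp.json())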
$filter
Filters allow you to reduce the data set by specifying a field and a requirement. There are 2 formats for filter:
- $filter=[Field Name] [Operator] [Value]
- $filter=([Field Name] [Operator] [Value])
The correct format to use depends on the operator you are using.
Operators for first format:
- eq - Exactly equal to
- ne - Inverse of eq
- lt - Less than but not including
- gt - Greater than but not including
- le - Less than or exactly equal to
- ge - Greater than or exactly equal to
Operators for second format:
- contains - Value is contained somewhere in the field
- not contains - Inverse of contains
- startswith - Field starts with value
- not startswith - Inverse of startswith
- endswith - Field ends with value
- not endswith - Inverse of endswith
There are other more specific expressions which you can find at the Microsoft Documentation on filters.
Filters can be chained together internally with 'and' or 'or' to create more complex expressions:
- $filter=([Field] [Operator] [Value]) or/and ([Field] [Operator] [Value])
The in (v1,v2,v3…vN) keyword is not supported; you need to create an expanded filter such as (Field eq v1) or (Field eq v2) or … (Field eq vN), where N should be less than 500.
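A small helper sketch for building such an expanded filter in Python; the field name and values are whatever your query requires.

def expand_in_filter(field, values):
    """Build "(Field eq v1) or (Field eq v2) or ..." for fewer than 500 values."""
    if len(values) >= 500:
        raise ValueError("keep the number of values under 500")
    return " or ".join(f"({field} eq {value!r})" for value in values)

# expand_in_filter("Country", ["UK", "France"])
# -> "(Country eq 'UK') or (Country eq 'France')"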
Field Name references and Array $filter
The convention for field name referencing is based on the JSON response property name. There are some limitations on the scope of filtering by field name and on how the response is returned. Field name paths are only supported for first-level child nodes where the property name is not unique; in all other cases a child node path is not required.
Matching scope $filter restrictions for APIv2:
- JSON structure example:
{ "FieldName1" : "value1", "Items" : [ { "ArrayElementFieldName" : "arrayValue1" }, … ], "FieldName2" : "value2", … }
- $filter on fields contained within an Items array needs to be handled using string search techniques. For example, to return all records whose Items contain arrayValue1, the filter expression would be defined as $filter=contains(Items,'arrayValue1')
- Note also that while the parent elements are returned, only the matching Items array lines will be returned. If you require all Items array elements, it is recommended that a two-pass query approach be applied: use $select=Uid&$filter=contains(Items,'…'), then query on the array of Uid response elements as $filter=Uid eq value1 or Uid eq value2 …
- When filtering on child nodes, path elements are only valid to one level. For example, $filter=parentnode/childproperty eq 'value1' is valid, whereas $filter=parentnode/childnode/childproperty eq 'value1' is not valid.
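A sketch of that two-pass approach in Python, assuming a bearer token is already held; the pub/orders resource name is an assumption for the example, while Uid and Items follow the notes above.

import requests

BASE_URL = "https://api2.datapel.net/api.datapel/v2.0"
HEADERS = {"Authorization": "Bearer <your access token>", "Accept": "application/json"}

# Pass 1: collect the Uids of parent records whose Items contain the value.
first = requests.get(
    f"{BASE_URL}/pub/orders",
    params={"$select": "Uid", "$filter": "contains(Items,'arrayValue1')"},
    headers=HEADERS,
    timeout=60,
)
uids = [row["Uid"] for row in first.json()]  # assumes a JSON array response

# Pass 2: re-query by Uid so every Items element of each matching parent is returned.
uid_filter = " or ".join(f"Uid eq {uid!r}" for uid in uids)
second = requests.get(f"{BASE_URL}/pub/orders", params={"$filter": uid_filter}, headers=HEADERS, timeout=60)
full_records = second.json()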
$orderBy
The orderBy function allows you to specify a sort for your data. The syntax for this is:
- $orderby=[Field Name] [Direction]
Direction is either asc (ascending) or desc (descending), as shown in these examples:
- Every Product ordered by release date - http://services.odata.org/V4/OData/OData.svc/Products/?$orderby=ReleaseDate asc
- Summary of sales by year, largest subtotal first - http://services.odata.org/Northwind/Northwind.svc/Summary_of_Sales_by_Years/?$orderby=Subtotal desc
$top and $skip
Top and Skip tend to go hand in hand. Top can be used to limit the number of records returned, while skip allows you to start from further down your list of results. To create paged results, use the $top keyword to specify the number of records per page and set $skip to the page number multiplied by the records per page (counting pages from zero) to get a particular page of the result set.
The syntax for these is:
- $top=[number]
- $skip=[number]
Some examples:
- Get 50 Contacts, skipping the first Contact - http://services.odata.org/V4/OData/OData.svc//Contacts?$top=50&$skip=1
- First 5 Products ordered by release date - http://services.odata.org/V4/OData/OData.svc/Products/?$orderby=ReleaseDate asc&$top=5
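A tiny helper showing the paging arithmetic in Python, using a zero-based page index so that page 4 gives $skip=2000, consistent with the Pagination section above.

def page_params(page, page_size=500):
    """Return the $top/$skip pair for a zero-based page number."""
    return {"$top": page_size, "$skip": page * page_size}

# page_params(4) -> {'$top': 500, '$skip': 2000}, i.e. the fifth page of 500 records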
OData Limitations
The Datapel API supports basic OData syntax for the keywords select, filter, skip, top, orderby and apply. Currently it does NOT support expand and lambda operators.
$filter limited support
- All logical operators except has and in are supported.
Spec: http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#sec_LogicalOperators Examples: http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#sec_LogicalOperatorExamples
- Grouping filters using () is supported.
Spec: http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#sec_Grouping
- String functions startswith, endswith and contains are supported.
- DateTime functions date, time, year, month, day, hour and minute are supported.
Spec: http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#_Toc31360996
$apply limited support (aggregations)
Aggregations using the OData $apply query option are supported.
- Transformations: filter, groupby and aggregate are supported. expand, concat, search, top, bottom are NOT supported.
- sum, min, max, avg, countdistinct and count are supported.
- $filter field names MUST be included in the groupby column selection list; calculated fields can be used in the filter expression, for example $filter=Total gt 0 applied to the query below.
/orders?$apply=groupby((Country),aggregate(Amount with sum as Total,Amount with average as AvgAmt))
The above is equivalent to the following SQL:
SELECT [Country], Sum(Amount) AS Total, Avg(Amount) AS AvgAmt FROM [Orders] GROUP BY [Country]
OData Web Components
To simplify web application development, OData-compliant web components can bind directly to the API endpoint services. For example, the DevExpress DevExtreme JavaScript visual components have OData 2.0 and 4.0 compatible controls. DevExtreme supports a multitude of frameworks including Knockout, Angular, React and others. OData was selected as the primary adapter for the API to align with the Microsoft web services stack and to standardise the filtering and selection syntax, which simplifies SQL translation.
For general information on OData compliance, see https://www.odata.org/
For more information on DevExtreme Web Components please refer to the following https://js.devexpress.com/
On OData refer to https://js.devexpress.com/Documentation/Guide/Data_Binding/Specify_a_Data_Source/OData/
For specific connector declaration visit https://js.devexpress.com/Documentation/ApiReference/Data_Layer/ODataStore/
OData Best Practices
Read and follow the best practices in this topic for optimal OData API performance and a better user experience.
Query Only Modified Records | Instead of querying all records, query only the records that have been modified since your last execution for integration use cases.
Avoid Frequent Query Runs to Get Real-Time Results | Repeating queries in much less than one hour will consume too much API resource, which may be throttled in the future. In extreme cases, you may even be denied service because of frequent queries, especially ones that require heavy backend processing.
Avoid Excessive Queries for Single Records |
Use $select to Get Only the Properties You Need | Without the $select query option, a query returns all available properties of an entity. For better performance, we recommend that you add the $select system query option to specify only the properties you need.
Keep Transactions Simple to Avoid Poor Performance | Poor performance is often a sign of misuse of APIs. As a rule of thumb, always keep your transactions simple.
Sync Frequently and Sync Only Delta Changes | Full data sync can lead to high resource consumption and poor performance. Instead of doing a full data sync intermittently, schedule more frequent data syncs but limit the scope to only delta changes. Keep in mind the current limits for the public API are 10,000 requests per day and no more than 2 requests per second.
Implement Retry Logic Properly | Retry logic can help recover failed transactions due to Internet connectivity or backend server issues, but retries must be attempted with caution based on the type of HTTP error: only certain error types are eligible for retry, and retry shouldn't be attempted for others. In all cases, wait at least 1 to 5 minutes before attempting a retry. Excessive immediate retries can cause denial of service, giving the server little time to recover. Retry no more than 5 times before abandoning your task. A minimal retry sketch is shown after this table.
Avoid Excessive Client Multithreading | To prevent server overloads and ensure high availability, we recommend limiting the use of concurrent API requests for any company instance. If you require a real-time, high-duty integration you may need to discuss an Enterprise API option, which removes daily quota and rate limits.
String Encoding on POST | Special characters in posted JSON string values are handled as shown in the table below.

Special character | Escaped form | Replaced by
Ampersand | | &
Less-than | | <
Greater-than | | >
Quotes | | "
Return, Linefeed, Tab | | Ascii(13,10,9)
Single Quote (no escaping required) | | '
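A minimal retry sketch following the guidance above: wait between attempts, retry no more than five times, and only retry errors that might be transient. The set of retryable status codes shown is an assumption for the example, not an official list.

import time
import requests

RETRYABLE_STATUS = {429, 500, 502, 503, 504}  # assumption: transient or server-side errors only

def get_with_retry(url, headers, params=None, max_retries=5, wait_seconds=60):
    """GET with cautious retries: wait at least a minute between attempts, stop after 5 retries."""
    for attempt in range(max_retries + 1):
        try:
            resp = requests.get(url, headers=headers, params=params, timeout=120)
            if resp.status_code not in RETRYABLE_STATUS:
                return resp  # success, or an error that should not be retried
        except requests.ConnectionError:
            pass  # connectivity problems are worth another attempt
        if attempt == max_retries:
            break
        time.sleep(wait_seconds)  # guidance: wait 1 to 5 minutes before retrying
    raise RuntimeError("Abandoning request after retries were exhausted")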
Authentication
Bearer Token Authentication
Datapel API version 2.0 uses Bearer Token Authentication, a crucial security mechanism that safeguards our API endpoints and ensures secure communication between client apps and our API servers. This section describes what Bearer Token Authentication is, how it works, and how to manage secure API access.
Bearer Token Authentication is a widely adopted method for securing APIs, relied upon by countless applications and services across the internet. It offers a robust, standardized approach to verifying the identity of clients attempting to access an API resource. With Bearer Token Authentication, you can establish trust between the API server and its clients, ensuring that only authorized parties gain access while keeping unauthorized users out.
In this documentation reference, we will cover the following key topics:
- Understanding Bearer Tokens: Get acquainted with the concept of bearer tokens, their structure, and how they function as credentials for authenticating your API requests.
- Token Generation and Management: Learn how we generate, issue, and manage bearer tokens securely, with best practice token lifetimes and refresh mechanisms.
- Authentication Flow: Understand step-by-step the process of how client apps present bearer tokens in API requests, and how our API server validates these tokens to grant or deny access.
The following sections describe our implementation of the Bearer Token Authentication process.
Understanding Bearer Tokens
Bearer Tokens are at the heart of Bearer Token Authentication, serving as the digital keys that grant access to API resources.
What is a Bearer Token?
A Bearer Token is a type of access token used in authentication protocols, such as OAuth 2.0 and OpenID Connect. It is a string of characters, typically represented as a long, random alphanumeric sequence. Bearer Tokens act as credentials that a client (usually a user or application) presents to an API server when making a request for a protected resource.
How Do Bearer Tokens Work?
Bearer Tokens operate on a straightforward principle: possession of the token equals access. When a client wishes to access a protected resource via an API, it includes the Bearer Token in the request, typically within the HTTP Authorization header. The server receives the token, validates it, and if the token is valid and not expired, it grants access to the requested resource.
Stateless Authentication
One of the advantages of Bearer Tokens is their statelessness. Unlike session-based authentication, where server-side state is maintained for each user, Bearer Tokens require no server-side storage of session data. This makes them highly scalable and suitable for stateless API architectures, such as RESTful APIs.
Security Considerations
While Bearer Tokens offer an efficient way to authenticate API requests, they come with security responsibilities. Since anyone in possession of a Bearer Token can access the protected resources, it is crucial to protect these tokens from theft and misuse. Security measures include token encryption, short token lifetimes, and token revocation mechanisms.
Token Payload
Bearer Tokens often contain a payload that carries information about the client or user, such as user roles or permissions. This payload can help your API server make access control decisions based on the token's content.
In summary, Bearer Tokens represent the authorization credentials clients use to access an API's protected resources. Understanding their structure, how they are transmitted, and their security implications is fundamental to implementing secure and efficient API-based applications.
Token Generation and Management
Token generation and management are critical aspects of Bearer Token Authentication. To ensure the security of the Datapel API, our tokens are 256-bit encrypted and have a limited lifespan of 30 minutes. A refresh token is provided to allow trust renewal by following the Refresh Authentication flow.
The Bearer Token Authorization flow involves a series of steps that allow a client (such as a user or application) to authenticate itself and gain access to protected resources on our API. This flow is based on the OAuth 2.0 framework and is widely used for securing APIs. Here's a step-by-step breakdown of the typical Bearer Token Authorization flow:
- Client Registration (Optional): In most cases, clients need to be registered for API access on our server. During registration, you will receive an API key and secret, which are used to identify and authenticate your application. This step is commonly seen in OAuth 2.0.
- Authentication Request: Your app initiates the authentication process by sending a request to our API TOKEN endpoint.
- Token Issuance: The API server processes the token request, verifies identity, and, if everything checks out, issues an access token. This access token is a bearer token, and its purpose is to prove the client's authorization to access protected resources. Included in the token is a refresh token which allows renewed trust access tokens without needing the initial API key or User credentials.
- Token Usage: The client includes the access token in the Authorization header of its API requests, using the "Bearer" prefix (e.g., "Bearer [access token]"). This token is what grants the client access to the API resources.
- Token Validation: When the API server receives a request with an access token, it validates the token to ensure it's genuine, unexpired, and has the necessary permissions to access the requested resource. This step often involves checking the token's digital signature, expiration time, and scope.
- Access Control: If the token is valid, the API server grants the client access to the requested resource. Access control decisions may also be based on the user's identity and the permissions associated with the token.
- Token Revocation (Optional): To maintain security, tokens can be revoked by either the client or the authorization server when they are no longer needed or if they are compromised.
- Token Expiration and Refresh (Optional): Access tokens have a limited lifetime. Your application can request a new access token using a refresh token if the original token expires. This helps maintain the session's longevity while minimizing security risks.
- Logging and Monitoring: Our API has backend monitoring and logging and if required we can assist with troubleshooting any access issues you may experience by reviewing request history.
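An illustrative sketch of the flow in Python. The TOKEN and REFRESH endpoint paths, scope value and payload property names below are placeholders, not the documented ones; take the real values from the Authentication endpoint reference.

import requests

BASE_URL = "https://api2.datapel.net/api.datapel/v2.0"

# 1. Authentication request: exchange the API key and secret for tokens.
token_resp = requests.post(
    f"{BASE_URL}/pub/token",  # placeholder path
    json={"api_key": "<your key>", "api_secret": "<your secret>"},  # placeholder field names
    timeout=60,
)
token_resp.raise_for_status()
tokens = token_resp.json()
access_token = tokens["access_token"]    # placeholder property names
refresh_token = tokens["refresh_token"]

# 2. Token usage: call a protected resource with the Bearer prefix.
products = requests.get(
    f"{BASE_URL}/pub/product",
    headers={"Authorization": f"Bearer {access_token}", "Accept": "application/json"},
    timeout=60,
)

# 3. When the 30-minute access token expires, follow the Refresh Authentication flow
#    using the refresh token instead of re-sending the API key, e.g.:
# requests.post(f"{BASE_URL}/pub/refresh", json={"refresh_token": refresh_token}, timeout=60)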
Endpoint Reference
Welcome to the Endpoint Resources Listing section of our API reference. This comprehensive guide provides a detailed overview of all available endpoints and resources within our API. Whether you're a developer looking to integrate our services into your application or an experienced user seeking to explore the capabilities of our API, you'll find everything you need to get started and make the most of our platform.
In this section, you will find a structured and organized list of API endpoints, each accompanied by a description of its purpose and usage. We have designed our API with simplicity and flexibility in mind, ensuring that you can efficiently interact with our platform to meet your specific needs.
To streamline your development process, we have categorized the endpoints into logical groups and provided clear explanations of the parameters and data formats associated with each resource. Additionally, you'll find information on authentication methods, response formats, and any specific requirements for using particular endpoints.
Whether you're retrieving data, creating records, or performing complex operations, this Endpoint Resources Listing is your go-to reference for understanding and utilizing our API effectively. Feel free to explore the endpoints that align with your project's goals and requirements, and use this documentation as your trusted companion on your API integration journey.
Please refer to the table of contents or use the search functionality to quickly locate the endpoints and resources that interest you. If you have any questions, encounter issues, or need further assistance, our support team is here to help.
When reviewing the Post or Response Body it is recommended to click the expand all option to display all nodes and properties.
Let's dive in and explore the wealth of possibilities our API offers. Happy coding!
Change Log
See all updates and changes to our API listed by Date.
Date | Version | Description
2024-OCT-28 | 2.0.1028 | New Estimates endpoint added. Endpoint Name: estimates. Change: Use endpoint to get a list of estimated kit and assembly units that can be built based on available component inventory.
2024-AUG-26 | 2.0.0826 | New JobCodes endpoint added. Endpoint Name: jobcodes, jobcode. Change: Use endpoint to get a list of jobcodes and a jobcode by Uid.
2024-JUL-24 | 2.0.0724 | Documentation clarification. Endpoint Name: all GET endpoints. Change: Clarification on use cases for $apply with $filter keywords in the OData Standards Support section.
2024-JUL-12 | 2.0.0712 | New Location Bin endpoint added. Endpoint Name: locationbins. Change: Use endpoint to get a list of location bins.
2024-JUL-09 | 2.0.0709 | New Attachment POST endpoint added. Endpoint Name: attachment. Change: Use endpoint to add document attachments to products or order transactions.
2024-JUL-08 | 2.0.0708 | New WorkspaceOptions endpoint added. Endpoint Name: workspaceoptions. Change: This endpoint provides configuration and workspace setting information.
2024-JUL-06 | 2.0.0706 | New AuditEvent POST endpoint added. Endpoint Name: auditevent. Change: This endpoint is designed to allow integration developers to provide status, security, error and warning events directly to the audit list within the customer workspace.
2024-JUL-03 | 2.0.0704 | Notice on property retirement. Endpoint Name: Inward Order List. Change: This field causes confusion between Despatch and Sale Order properties and is intended as Supplier Invoice Number or Supplier Reference Number. To address the ambiguity the field will be renamed.