Data import guide

This document explains how the import tool provided in the XTreeM platform works. This platform is used by Sage Distribution and Manufacturing Operations. As the import tool supports a given data structure, the examples shown in this document use Sage Intacct entities.

A zip file containing examples of every import mentioned in the chapters is available with this document. If you use the standard templates provided in an empty tenant, you will be able to import successively every CSV file supplied.

Carefully read the appendix of the document to know how to proceed with this important data set.

A second appendix has been added showing the changes introduced in the latest releases.

Integrating data in Sage Distribution and Manufacturing Operations requires understanding some preliminary concepts about data organization. This is the purpose of the next chapter.

Basic principles

In every ERP system, information is highly interconnected. Starting with a new ERP requires preparing and finalizing the data to ensure a good transition. This includes:

  • Core data (for example customers, suppliers, items, BOMs)
  • Backlog
  • Historical data (for example the pending orders, stock details, closed transaction history)

Manually entering this data is rare. Usually, data is extracted from the old system and imported into Sage Distribution and Manufacturing Operations, with a focus on ensuring consistency.

You can create or update records for entities in two ways:

  • Using APIs to run mutations on these entities. This is less convenient because you need to write code, but it can be efficient for managing regular updates through a predefined interface.
  • Working with a tool to integrate a large number of lines, for example, if they are extracted from a previous system. In this case, managing CSV files is the most common way to import data. Sage Distribution and Manufacturing Operations uses this CSV format for any node that has an available creation or update method.

The import tool simplifies data entry by using a standard CSV format and enforcing the same validation rules as the interactive entry process. Any document that does not pass these checks will be rejected.

Import functions

Two pages are available to manage imports:

  • Import and export templates
  • Import data

Template definitions

A template describes a CSV format and lists the properties from a main entity called a node. You can generate a CSV file from the template. This CSV file includes a mandatory line listing the codes of the properties to import, with specific syntaxes to define groups and sub-groups. The file also includes 2 non-mandatory lines defining the data type and a description.

Creating a template from scratch by entering a node name includes all the properties from the data dictionary. You can update it by removing non-mandatory information. The presetting data includes default simplified templates to speed up customer onboarding.

Import function

The import function uploads a CSV file containing data to import.

It has the same header as the CSV file generated from a template, but it can contain a subset of the properties as long as the mandatory ones are present. The following lines contain the data values to be imported. At upload time, a sanity check is done on the file.

When the upload is done, you can execute the import. This will create or update records, depending on the option chosen. If both options are selected, the system checks whether the record exists to decide between an update and a creation. Errors occur if existing records are found in creation-only mode, or if records that don't exist are found in update-only mode.

Export of data

An import template can also be used as an export template. All the information given in this document for the import format is also valid when you export data. Exporting is possible from the main lists available for every entity. When you select a record in the list, an icon appears to export data using one of the appropriate templates. Exporting and reimporting data is a common practice during implementation projects.

For example, using the same templates for export and import, you can:

  • Work in a dedicated implementation environment
  • Finalize the data
  • Make manual adjustments
  • Export the dataset to import it into the production environment

Data organization in Sage Distribution and Manufacturing Operations

Sage Distribution and Manufacturing Operations is based on GraphQL entities called nodes.

Node definition

A node is an entity that manages records, sets of data, and properties. The following operations can be associated with a node:

  • Read operations:
    • read to read a record
    • query to read several records
    • lookup to do a query on a subset of properties for selection purposes in the user interface.
  • Operations performing updates, called mutations in the GraphQL terminology:
    • Create, Update, and Delete are the standard mutations.
    • A node can also have specific mutations such as revert for an accounting entry, post for a sales invoice, and block for a customer.

A node includes properties that can be of any type:

  • Classic types such as strings, numeric values, Boolean, enumerations, dates, pictures, rich texts, JSON structures, and so on.
  • References to other nodes. For example, a customer can have a delivery mode reference which is another node.
  • Collections of references to other nodes. A customer can have several addresses, every address being a node. References can be vital or non-vital.
    • A vital reference cannot exist without the origin node. This is the case for order lines attached to a sales order.
    • A non-vital reference can continue to exist if the main node has been deleted. For example, if a customer record references several sales representatives managing the account, deleting the customer doesn’t remove the sales representatives.

Every node record has technical properties, such as creation and update date-time and user, but also a unique identifier (long integer) called _id that is assigned automatically at its creation and does not change during the life of the record.

Keys, references, and setup data

Most entities have at least one unique key called the natural key. This key is used to identify:

  • A record in a data entry flow such as an import.
  • References to other records in the referenced nodes.

The natural key can have more than one component. You can choose the names for your natural key's components. If you have a unique component for setup data, it is usually called id.

Examples of natural keys are the item ID, a customer code, a country normalized code, a document number, and so on. An existing natural key is unique and can be used for searching purposes, notably in the user interface. The natural key is not used to manage the referential integrity in the database structure. The referential integrity is based on a unique key value called _id. This means renaming the natural key values of records has no impact on the linked records.

When importing data, we also use the natural key to identify the references as it is the code known by the customers. For example, when importing a sales order, the customer code, the item code, and the payment terms are all identified by their external code which is the natural key. The import will transform these natural keys into the corresponding _id when updating data.

Importing records in modification requires the node to have a natural key.

Presetting records

Nodes storing presetting are usually delivered with content at the creation of an empty tenant. This allows software editors to deliver presetting that simplifies customer onboarding. This is the case for units of measure, payment terms, countries, and so on.

When an update of the software happens, presetting records can be completed and updated to comply with changes in regulations. You can also add your own records in these entities. The system distinguishes the supplied presetting from the additional records through a vendor code called _vendor in the GraphQL schema of the setup entity.

When this _vendor code has the sage value, the record is a standard one. You cannot modify such a record: The id and most properties cannot be changed. There can be some exceptions depending on the entity. Sometimes you can deactivate standard setup records and change their description. The editable properties on a standard setup remain unchanged, but an upgrade of a tenant can trigger an automatic update of other properties.

Otherwise, it is a record added afterward. In this case, you can modify the corresponding key values and the other data. Applicative restrictions can apply to the changes.

During the import procedure, the reference to such a setup entity is performed with the natural key, which includes a property usually called id. Sometimes, there is more than one component.

All keys are case sensitive. If you enter a reference to a unit of measure, the keys of the standard units provided are all uppercase. Entering Each instead of EACH will raise an error (reference not found). Currencies and enumerations follow the same case sensitivity rule.

In some cases, when an array of references exists, you can have a _sortValue present in the key.

This is, for example, the case for the addresses associated with a business entity. The _sortValue is a numeric value that starts at 10 and is incremented by steps of 10. This means that if you want to reference the third address of the JOHNDOE customer in a document, the values you need to use to make the reference are JOHNDOE and 30.

Vital and non-vital reference

References can be vital or non-vital:

  • A vital reference cannot exist without the origin node. This is the case for order lines attached to a sales order.
  • A non-vital reference record can exist even if the main node has been deleted. If a customer record references several sales representatives managing the account, deleting a customer does not mean that the sales reps will disappear.

Vital children can be unique for a record (an address on a sales order), or multiple (a collection of lines in a sales order).

Complex document and vital children

The import and export engine can, in a single file:

  • Import a node and its vital children in cascade
  • Export the same data, and data linked to non-vital children

A file that is exported from a tenant can be re-imported using the same import template. If non-vital data is exported, it will be ignored during the import.

Let’s take some examples for an item record:

  • In an item import file, you can have:
    • item records
    • several item and site records (vital children)
  • Then, for each item and site record, you can have several valuation records (vital children of the item and site record). All of this can be imported together.
  • You can also export the name, the symbol, or other information related to the item's unit of measure. This is non-vital related data. It is simply ignored during the import: you cannot create an item and its unit at the same time during an import.

This shows how the data structure will appear when defining an import template:

Data structure when defining an import template

You do not have to keep all this information when you import data. You can create files that don’t include all these groups, except where a group is required, such as sales orders, which need at least one line record.

Direct import of vital children in creation and modification

In some cases, you can directly import the vital children of a node for an existing record. Item and site records can be imported during an item import as a vital collection. You can also import item and site records for existing items by using an item and site import template.

When the following conditions are fulfilled, this import is possible in modification:

  • A natural key exists directly for the vital child. This is the case for item and site, where a natural key composed of the item code and the site code exists.
  • When a natural key exists for the parent record, an additional _sortValue is available to automatically define a natural key for the vital collection child.

This is the case for the item, customer, and price vital collection, where the natural key is composed of the item code, the customer code, and a _sortValue property. Modifying the records directly with an item, customer, and price template requires the 3 components in the file. If you create records with the same template, the _sortValue component can be added if you want to force the value, but it is not mandatory. It will have a computed default value.

Note that a direct import of vital children is not always possible.

  • If you create a template for a child node that doesn't support import, the template will automatically be set to Export only.
  • When using a template for an import, the system will automatically disable the modification mode if a modification is impossible.
  • When an import in modification is performed, if the _sortValue is mandatory to identify the record because no other key exists, the system will send an error if the field isn't present.

GraphQL schema and use for APIs

Using GraphQL is the only way to access the relational database used by Sage Distribution and Manufacturing Operations. A user dealing with APIs or a developer has no direct SQL access to the database. This ensures security. Rights are granted on nodes and operations, securing tenant isolation.

A GraphQL operation can only access the tenant it has been connected to. No limit exists on the accessible data if a node is published.

You can use the GraphiQL tool to navigate across nodes and find them. You can access the GraphiQL tool directly from the main URL of the tenant, with the /explorer/ extension. It is the easiest way to navigate across the schema and look at the different nodes that are available. You can also find information in the documentation. The main page of this tool looks like this:

GraphQL API sandbox main page

You can enter queries or mutations with the GraphQL syntax in the first column. The results display in the middle panel. You can access the schema directly in the right panel.

Data format definition

CSV Global structure for files to be imported

Character set

The CSV files can be encoded either in ASCII or in UTF-8 without a BOM header. This allows importing national and special characters. The fields are separated by semicolons.

Using Excel to create CSV files

When creating CSV files with Excel, you need to separate the fields with the List separator defined in the Windows regional settings.

In some countries, the default list separator is a comma. In European countries, the comma is reserved for the decimal symbol and the list separator is generally set to a semicolon. Because a semicolon is expected for the CSV files imported by the import tool, make sure your regional settings are set correctly on Windows:

  • Open your Regional settings.
  • Click on Administrative language settings.
  • Click on Additional settings.
  • Update the List separator with a semicolon.

If you have texts with special characters such as accents, make sure you save the file in the right format.

Structure description

The CSV format has columns of data separated by a semi-colon:

  • The header line defines what the different columns of the CSV file represent:
    • For a single property, its name is given, prefixed by ! if it is the record's main key, and by * if the information is mandatory. This * or ! character is only present on the first line.
    • If a reference is done with the natural key, it is identified by the referenceName syntax, where referenceName is the property to be imported. referenceName(_id) is used if the _id value is supplied to retrieve the linked record.
    • Arrays of references can be identified by a group of data and will be explained further in this document.
  • Two additional lines can be present to describe the structure, but they are not mandatory. Additional lines that must not be considered as data are identified with the word IGNORE after the last column.
  • These 3 lines have the same format as the template file. Every additional line represents a record.

Basic format with single-line records
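
For illustration, a minimal file in this basic format could look like the following sketch. The property names and values are hypothetical and only illustrate the header syntax; real templates provide the exact property codes:

!id;*name;description
string;string;string;IGNORE
Identifier;Name;Description;IGNORE
PAY30;30 days;Payment at 30 days
PAY60;60 days;Payment at 60 days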

References, vital children, and collections in the header and lines

Frequently, the data to import is structured in header and line records. Sometimes, a single record can have several line sets and even nested line sets.

For example, if an order has order lines, every order line can have sub-lines if there are different delivery dates. This is done by splitting the total quantity of the line.

Another type of line can simultaneously be associated with the same header. For example, a list of comment lines, with each comment line being a single record.

This corresponds to a property in the main node that is an array of references to nodes containing lines. There are also cases with a unique reference to a sub-structure. For example, a sales order that contains a specific address.

The CSV format allows importing data of lines associated with header information if the lines are vital references to the main node.

You can also import the header and lines separately when the following situations happen together:

  • Lines are not mandatory in a document.
  • A creation operation exists on the corresponding node.

CSV format for the header and lines

When a vital collection is on a node, the properties of the collection's members are present on the node just after the property that represents the collection.

  • The property that represents the collection is prefixed by a #. This indicates that the following properties are coming from a child node.
  • If a sub-collection exists in a child collection, the property corresponding to the sub-collection is prefixed by ##. Every additional nesting level is defined with an additional #. You can have two collections at the same level, but nested collections must follow the group they are nested in.
  • List the main record's properties before describing any nested collection.
  • A sub-node that is not a collection starts with / instead of #. You can also nest such groups. Such sub-nodes are often addresses and contacts on business documents.

You need to place the records on different lines in the correct nesting order.
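
For illustration, here is a sketch of a header-and-lines file using the # convention. The column names itemSites, site, and safetyStock, as well as the values, are assumptions for the example; a non-collection sub-node such as an address would be prefixed with / instead of #:

!id;*name;#itemSites;*site;safetyStock
APPLE_PIE;Apple Pie;1;S02-FR;20
APPLE_PIE;Apple Pie;2;S03-FR;10
CHERRY_PIE;Cherry Pie;1;S02-FR;25

The main record's columns are repeated on every line, and the value in the #itemSites column gives the rank of the child line within its collection, as in the company and addresses example later in this document.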

You can also import a simpler file with the same item template.

By default, the standard template includes the useful columns available and every additional group that can be imported at the same time.

You can create files to be imported without the non-mandatory columns you don’t want to import, and without the groups you don’t want to import, as long as a detail record is not required for the header.

If you keep the columns associated with a sub-group, and if there is data in there, the mandatory fields in sub-groups must be filled.

Duplicate property names

In a header, every property must be unique. But sometimes, properties can have the same name on different nested groups, for example, name, status, or amount.

When this happens, you need to add a suffix to the property, starting with a #, so the column's title is unique.
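
For example, in the company and addresses template shown later in this document, the address group reuses property names that also exist elsewhere in the template, so the columns carry a #1 suffix (the header below is a shortened subset of that example):

#addresses;*isActivated#1;*name#1;addressLine1;city;country;*isPrimary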

Properties format

Dates

Dates must be in the DD-MM-YYYY format where DD is the day, MM the month, and YYYY the year. The separator must be a dash.

Date ranges

You can now define a single field as a date range in itemPrices. The format is the following:

[YYYY-MM-DD, YYYY-MM-DD] where the first date is the start boundary of the range and the second is the end of the range. Brackets are used when the date boundaries are included in the range. The dates are excluded if parentheses are used instead.

For example, [2024-01-01,2024-01-05] means from the first to the fifth of January included, whereas [2024-01-01,2024-01-05) means from the first to the fourth of January included (the fifth being excluded).

This format can be used for export and import, but on export, the range is always given in the [YYYY-MM-DD, YYYY-MM-DD) format.

Numeric values

Numeric values must be in decimal format. If needed, the decimal separator must be a point. No separator is allowed for 3-digit groups. A minus sign is allowed for negative numbers.

For example, -12345.6 is valid, whereas 12,345.6 is not.

Enumeration

An enumeration is defined by a restricted list of choices. In the user interface, such a field appears as a combo box and is translated into the user’s connection language.

In an import file, you need to use the internal code. The second header of a standard template gives the list of codes that you can use.

For example, enum(purchasing,production) is used for replenishmentMethod in the item template.

Codes are case sensitive. For example, an error occurs if you enter Purchasing instead of purchasing.

Strings

A string can include any character, except carriage returns and double quotes. If you need to manage double quotes in a string, enter the whole string in double quotes and add another double quote next to each double quote inside it:

APPLE_PIE ;""Apple pie 10"" diameter ;Apple Pie ;10 

This can be imported as a 4-field line with 3 alphanumeric fields and a numeric field.

Pictures

You can add a picture in Base64 format. It then becomes a string that can be very large.

Boolean

A Boolean can be represented by TRUE and FALSE values, either in uppercase or in lowercase.

Localizable texts

You can store texts such as product descriptions and retrieve them in different languages as an array of texts associated with a list of language codes.

This process is almost transparent for the user. Depending on the language used for the connection, the text displays in the correct language. If the text is unavailable in the language, there is a base text with a default value.

A language code can be a two-character code compliant with ISO 639-1. It can be en, fr, de, es, it, and so on.

When you connect, you usually see a language code associated with a country code, for example, en-US, en-GB, fr-FR, fr-CA, es-ES, or es-AR.

If the language code used for connection does not correspond to an existing translation for a text, the system tries to see if there is an equivalent text associated with the two first characters of the language code. For example, the following translations are stored in the database:

  • base: Text 1
  • en: Text 2
  • fr: Text 3
  • en-GB: Text 4
  • en-US: Text 5
  • fr-FR: Text 6

If you connect with:

  • fr-FR, Text 6 is used because it is a perfect match.
  • fr-CA, Text 3 is used because the first two characters match.
  • pt, Text 1 is used because it is the default base text.

When you create a record through import with just the property name as a header, only the base text is filled. But you can create several columns in your template by adding the language code between parentheses just after the property name.
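
For example, a localizable description property (the property name and values are assumptions for the example) could be split into several columns, one per language code:

description;description(en-US);description(fr-FR)
Apple pie;Apple pie;Tarte aux pommes

The first column fills the base text, and the other columns fill the corresponding translations.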

References

References are managed by the natural key. This key must be unique. It can be a single or a composite key. When several segments exist in a key value, they are separated by the pipe (|) character.

Some setup records have a setupId property that corresponds to the natural key. It can also be an id property.

For tables that are not setup data, the natural key is usually the code of the record. For example, the document number, item code, or customer ID.

References on multiple child nodes with the _sortValue property

When a reference points to a child node associated with a main entity that does not have another unique key, a technical property called _sortValue can be in the second part of the key. This ensures having a unique key. For a given main entity key, the _sortValue values are all different.
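
For example, combining this with the JOHNDOE example given earlier, a column referencing the third address of the JOHNDOE business entity (here a hypothetical shipToAddress column, similar to the one used later for customers) would contain:

shipToAddress
JOHNDOE|30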

JSON fields

Columns can be defined as JSON data. When this happens, you need to put the whole JSON field between 2 sets of double quotes to escape the usual double quotes needed in JSON.

Mandatory fields

Mandatory fields are prefixed by:

  • * in the first line of the template
  • ! if it is the record identifier

The mandatory columns must be in the file to be imported if you select the Creation checkbox and launch the import.

If you launch the import in update mode only, the mandatory columns can be absent from the file. In this case, the value does not change for the updated records. The natural key that identifies the record to modify is mandatory.

If you launch the import for creation and update, the mandatory columns must be present, even if every record present in the file already exists in the database. If values are missing in a column and the record already exists, the update is done with a null value. An error occurs if the field is mandatory. If you want to update only some columns and keep the others unchanged, use a format that does not contain them and perform an import in update mode only.
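
For example, a minimal sketch of an update-only file (the column names and values are hypothetical) contains just the natural key and the columns to change; every column left out of the file keeps its current value:

!id;name
PAY30;30 days end of month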

The following grid summarizes how empty values are managed for mandatory properties.

Import mode for the record | A default value exists | The column is present in the file | A value is present on the line | Result
Creation | Yes | No | | Default value is used
Creation | Yes | Yes | No | Default value is used
Creation | Yes | Yes | Yes | Line value is used and checked
Creation | No | No | | Global error
Creation | No | Yes | No | Error for the line
Creation | No | Yes | Yes | Line value is used and checked
Update | Yes | No | | No update for the property
Update | Yes | Yes | No | Error for the line
Update | Yes | Yes | Yes | Line value is used and checked
Update | No | No | | No update for the property
Update | No | Yes | No | Error for the line
Update | No | Yes | Yes | Line value is used and checked

If a property is not mandatory in update mode, the rules are the following:

  • If a column is present in the file, the value replaces the previous one. If the column is present with no value, the previous value is erased during the import, because an empty value is allowed for a non-mandatory property.
  • For translatable text, importing a column that does not mention the language replaces the base value. It is recommended to indicate the language to be sure to update the right text. You can use the fieldname(base) syntax if you only want to replace the base text.
  • If a column is not present in the file, the previous value will not be modified.

For localizable texts, as there can be several columns for different languages, the situation is a little different:

  • If there is a column that doesn't mention the language, the value will replace the base value. Add the language to make sure you update the right text. To replace only the base text, use the syntax fieldname(base).
  • If the text is mandatory, several columns with different languages are given in the file, and you are in creation mode:
    • At least one of the columns must be filled. The first one will fill the base description as well as the description for the right language code.
    • If descriptions are not given for some languages, they are ignored and the corresponding translations are not set to an empty string. The base translation will be used for these languages.

    In modification mode, translations not given for some languages will not change the existing translations if they existed before.

Data import procedure

This section explains how to generate templates and import data in Sage Distribution and Manufacturing Operations.

Generate the template

Open: Administration > Import and export > Import and export templates

The import tool is based on templates created by entering an ID, a name, and the node name.

To create an import or export template, select the node name and enter a name and an ID for your template. The corresponding description is filled in by default, but you can simplify it.

The Template columns grid displays the list of fields and the corresponding groups.

In this grid, you can:

  • Use the arrows to move fields up and down.
  • Add a field by clicking the Insert button and selecting a property from the group's available fields and collections.
  • Delete properties or an entire group.

Adding a field with the + button adds a property after the current one.

A field displays the CSV file header of a template that can be used to enter data. Only the first line of the template is necessary. You can select the Data type and Description checkboxes to generate the data type and description in the created CSV file. They are not selected by default. Selecting the Data type checkbox provides the list of applicable choices for an enumeration field.

You need to use a template to import a CSV file. The file that contains the data to import must at least have the first line present in its template. The 2 other lines are documentation and are not mandatory. A CSV data file can have fewer columns than the template if all the mandatory columns are present. The order of the fields can also be different in the file as long as the structure of the separate groups is in the right order. Groups of fields must come before their child groups.

Select the Generate template button to download a CSV file. You can open the file, enter data, and then import the corresponding file.

If you have data on the entity you want to import, you can select a record with relevant values and export it to get a standard header and a data sample.

Import data

Open: Administration > Import and export > Import data

This page has 2 tabs:

  • Data
  • Import results

A file can contain thousands of lines that take a long time to process, so the import runs asynchronously.

Data tab

First, upload the file to import with the Browse files button located under the Select file section.

The upload can be long, so a progression bar displays. Do not leave the page before the progression bar reaches 100% because this interrupts the operation.

At the end of the upload, you can:

  • Select the operations to perform:

    • Insert to create new records
    • Update to update existing records

    If you only select Insert, existing records return errors. If you only select Update, the non-existent records return errors. If you select both, the system creates records or updates them depending on the situation.

  • Perform a consistency check by selecting Test import. The file is read without triggering any operation. All the consistency checks are done. For example, the data format, the existence of the references, and so on. As a test import does not create records, some errors controlled by the database do not occur at this stage. For example, an undetected error can be an import in creation mode only with the same record twice in the import file.
  • Choose the behavior to adopt when errors occur. The import works record by record, including its children. If a record or one of its children is incorrect, the whole record import fails. The import task continues to import the following records if you select the Continue and ignore the error checkbox. The import stops when you reach the maximum number of records in error entered in the Maximum number of errors field. An error does not cancel the import of the valid records already imported. There is no rollback.
  • Select the template to use.
  • Select the Import button to launch the import task. A confirmation box displays, followed by a confirmation message.

Import results tab

On the Import results tab, you can list the files submitted with the current template and see their status:

  • Pending means that the submission is taken into account, but the job has not started yet.
  • In progress means that the import is still running.
  • Completed means that the file was completely processed. In this case, some records might not have been integrated. The Error detail column indicates the errors found. You can download the file containing the errors from the line. Refer to the Error management chapter for more information on errors.
  • Failed means that the format of the file was not correct. For example, it can be an error such as a missing mandatory column in the header, a column mentioned that does not exist in the header, or an incorrect format of the header. If this happens, a global error message displays when you select the line.

You can select the Refresh button to refresh the grid while imports are in progress.

If you leave the page while imports are still running, you can display the results of the imports in progress for a given template from its Import and export template page. The Import history tab contains the same lines.

Error management

Record-level errors

The import manages separately every group of records linked to the main entity and creates each document in a separate transaction. For example, if you import documents with 3 nested sub-levels, any error at any level of the document rejects the document, but the next document is imported if no error is detected.

When launching an import, you can decide whether the import stops or continues when errors happen. The Maximum number of errors field indicates the number of lines in error allowed before stopping the import. If you set this number to 1, the import stops at the first error, but all the documents that were correct before the error occurred are imported.

The system copies the documents in error during the import into a separate file that has the same format as the original file. This file has an additional _error column at the end. This column gives the error message explaining why the document import failed. You can directly edit the error file, fix the data to avoid errors, and re-import the error file. The system ignores the _error column at import time. This allows you to import data in several steps, for example:

  • You can run an import with a higher limit for the number of errors, ideally until you reach the end of the file.
  • You can recycle the documents in error by fixing them and reintegrating them until there is no error.
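
As an illustration, an error file for a hypothetical item import could look like this (the column names and the error message are invented for the example; only the _error column appended at the end is the actual mechanism described above):

!id;*name;_error
APPLE_PIE;Apple Pie;Reference not found

You can correct the faulty values in place and re-import this file as it is: the _error column is ignored.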

Global errors

When there are global inconsistencies in the header, the error displays in the Results grid's Error detail column.

Select the line to view the error's details.

Error management summarized

  • The errors are managed at document level. If an error is detected on a line or a sub-line, the whole document is rejected.
  • When importing data, set the number of errors you accept before stopping the import. If you set this value to 1, the import stops immediately on the first faulty document. If some correct documents were present in previous lines of the CSV file, they are still imported. A good policy is to set a high maximum value when you are testing the import. This gives you all the erroneous lines at once, allowing you to correct them before running the import on the production data.
  • Having no error in test mode doesn't guarantee the import in production will work. Some errors linked to the creation or update cannot be detected in test mode. This is especially the case if a key becomes a duplicate value because a previous record has been imported in the same file.
  • If the user is not allowed to create sales orders on a given site, importing a document for this site will create an error. The user needs the appropriate access rights to be able to run the import without errors.
  • The definition of a document depends on the template used. 
    • If a template linked to item and site is used, every record is an item and site definition. Only the lines linked to sites the user is not authorized for are rejected. The other lines, assigned to authorized sites, are imported because they are autonomous documents.
    • If a template linked to item is used and includes item-site information, a line linked to a site the user is not authorized for triggers the rejection of the whole item. The lines assigned to authorized sites are also rejected.
Use the item-based template if you want to create items and item-sites simultaneously.

Creation mode and update mode

You can import data in creation mode, update mode, or both. If you choose to use them both, the import engine will check if the record already exists. If it does, an update will be done. Otherwise a creation will be done.

Import in creation is always possible but import in modification is only possible if a natural key exists and if the update has been implemented by the development team.

When a vital child collection exists, the natural key can be:

  • Directly defined in the child record. For example, the item and site collection, which is a vital child of the item, has a natural key composed of just the item and the site. These two properties are sufficient to uniquely identify a record. When you display the item and site records on the item page, the _sortValue is used to order the records. It is not part of the natural key. For item and site import in modification, _sortValue is not mandatory, although it can be modified through this import.
  • Inherited from the parent with a natural key. In this case, the _sortValue is part of the natural key of the child record, which combines the natural key of the parent and the _sortValue. This is the case for item, customer, and price: it is a vital child of item and it needs a _sortValue to build a natural key. An import in modification mode requires the _sortValue in the import file.

If these conditions are not met, the node cannot be imported in modification mode. This is the case for an autonomous node without a natural key, or for a vital child without a natural key whose parent node also has no natural key.

When using a template for an import, the system automatically disables the modification mode if a modification is impossible. If the _sortValue is mandatory to identify the record because no other natural key exists, the system sends an error when an import in modification is performed.

Manage templates

Open: Administration > Import > Import and export templates

Creating a template is easy: the template is generated automatically once you define the entity to import.

When a creation happens, the Template columns section of the record is filled in by default.

If custom fields exist on the node, they appear in the template as normal fields with their name. In the CSV file, they appear in the following format: _customData(propertyName) where propertyName is the technical name given to the custom property.
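
For example, if a hypothetical custom property named color has been added to the node, the corresponding CSV column header would be:

_customData(color)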

Select the Generate template button to open a dialog box with your template. Select the CSV file's name to download it on your PC. You can then create data inside this file. You can use the template as it is and open it with Microsoft Excel.

You can also delete non-mandatory columns. If you do so, the created records have a default value depending on the business rules associated with the imported entity. In case of modification, they remain unchanged.

Managing templates with options

Some default templates such as stock entries and purchase receipts can include additional groups such as serial number details or lot details. The group related to serial numbers is only present if the corresponding serialNumberOption option is active on the Service options page.

As this option is not active by default, the provided default template does not include the data groups related to non-active options. As these templates are provided as factory ones, you cannot modify them. You need to create a new template for the node you want to import and use this template. By default, it contains all the possible columns, so if you generate your template, you will probably get more columns than what you need. You can then open your template in Excel and delete the columns and groups you don’t need. This reduces the number of columns you need for the import, and the mandatory columns are identified.

The template can contain columns that are not present in the file. However, you cannot have columns in the file that are not present in the template. This creates errors.

Import main entities

You can import any entity that supports CRUD actions and has a natural key. So, you can import most of the entities present in Sage Distribution and Manufacturing Operations. This section highlights data imported frequently during on-boarding projects.

Accounts

Do not import accounts if your tenant is connected to Sage Intacct. Only import accounts for stand-alone tenants.

Accounts are used to generate the posting entries for documents generated by Sage Distribution and Manufacturing Operations. They are identified by a 2-part key: id and chart of account. These are references to a setup table provided with values such as US_DEFAULT, GB_DEFAULT, FR_DEFAULT, ZA_DEFAULT. These correspond to charts of accounts used for the different legislations covered by Sage Distribution and Manufacturing Operations.

You can import accounts in creation and update mode.

In the account import file, you can enter 2 groups of sub-lines: attribute types and dimension types.

Posting classes

Posting classes are categories used to define on which accounts the posting is done for entries linked to entities such as items, customers, suppliers, and so on.

You can import posting classes in creation and modification mode.

Companies

Companies are the legal entities that regroup sites. Two groups of data are present and mandatory to create companies with an import:

description | *legislation | *chartOfAccount | siren | naf | rcs | legalFrom | *country | *currency | sequenceN
string | reference | reference | string | string | string | enum(SARL) | reference | reference | string
description | legislation | chart of account | siren | naf | rcs | legal form | country | currency | sequenceN
British Company | GB | GB_DEFAULT | | | | | GB | GBP | GB
American Company | US | US_DEFAULT | | | | | US | USD | US
American Company | US | US_DEFAULT | | | | | US | USD | US
French Company | FR | FR_DEFAULT | 123456789 | 12.34Z | RCS Lyon | SA | FR | EUR | FR
South African Company | ZA | ZA_DEFAULT | | | | | ZA | ZAR | ZA

The addresses are associated at a sub-level, with the American company having 2 addresses:

#addresses | *isActivated#1 | *name#1 | addressLine1 | addressLine2 | city | region | postcode | country | *isPrimary
collection | boolean | string | string | string | string | string | string | reference | boolean | IGNORE
addresses | is active(true/false) | name | address line1 | address line 2 | city | region | postcode | country | is primary(false/true) | IGNORE
1 | TRUE | C1-HEADQUARTER | 3, Scary street | | SOUTHAMPTON | | | GB | TRUE
1 | TRUE | MAIN PLANT | 4, Washington avenue | | SEATTLE | | | US | TRUE
2 | TRUE | C3-HEADQUARTER | 10, Jefferson Road | | MOOREHEAD | | | US | FALSE
1 | TRUE | C4-HEADQUARTER | Rue des Fleurs | | GIVERNY | | | FR | FALSE
1 | TRUE | C5-HEADQUARTER | 5, Mandela Square | | CAPE TOWN | | | ZA | TRUE

Since version 42, the contacts must be imported separately, associated with a CompanyContact node.

Contacts reference the company address and use a two-part key: the company code and a numeric sort value (_sortValue). During company import, addresses are created with the sequence 10, 20, 30, and so on. If addresses are created or imported separately, the rules are more complex. Exporting the data can tell you which _sortValue values are used.

After importing 4 companies and their associated addresses, you can run the extraction of these 4 companies from the company main list.

!id | #addresses | !_sortValue | name | addressLine1 | addressLine2 | City | locationPhoneNumber | isPrimary
C-FR | | 10 | C4-HEADQUARTER | Rue des Fleurs | | GIVERNY | | TRUE
C-UK | | 10 | C1-HEADQUARTER | 3 Scary Street | | SOUTHAMPTON | | TRUE
C-US | | 10 | C3-HEADQUARTER | 4 Washington avenue | | SEATTLE | | TRUE
C-US | | 20 | MAIN PLANT | 10 Jefferson road | | MOOREHEAD | | FALSE
C-ZA | | 10 | C5-HEADQUARTER | 5 Mandela Square | | CAPE TOWN | | TRUE

The first address created for every company had a _sortValue of 10; the second address had a _sortValue of 20.

Item categories

Items can have categories such as Food or Chemical. You can import a unique group of data for item categories.

Items

You can import items by creating a template associated with the Item entity. This imports the following entities into a simple file:

  • Item contains the main information related to the item: name, description, type (good, service, food, or chemical), type of management (sold, manufactured, or purchased), default units, traceability information, default sales, purchase information, and accounting information.

  • ItemAllergen contains the list of allergens associated with the product for food.

  • ItemClassification contains the list of chemical classifications associated with the product for chemical products.

  • ItemSite contains item information per site such as default quantities, reordering policies, and the valuation method.

  • Valuation is a sub-level of detail per item and site. You cannot enter data on it even if it is present in the template.

As the groups related to allergens, classifications, and item-sites are at the same level, you need to create them on different lines. There is no dependency between them. The item part remains on every line. These groups are optional, and you can import them separately using templates linked to the ItemAllergen, ItemClassification, and ItemSite nodes. This allows you to import items with only one group of data.

You need to import some data groups either separately or associated with other entities. It is the case for ItemCustomer, ItemSupplier, ItemCustomerPrice, ItemSiteSupplier, and ItemSiteCost entities and ItemPrice records.

If you import items while the Sage Intacct integration is running, make sure you import the name and description of your items in English. If you don’t indicate the language, the values are integrated with a default language that displays on Sage Distribution and Manufacturing Operations screens, but the English name and description won’t be present in the database. Empty names and descriptions will then be sent to Sage Intacct and trigger integration errors.

Item sites

You can import an item site separately from the item import. You can do this in creation and update mode.

The item reference and the site reference fields are mandatory.

You cannot:

  • Use the valuation sub-group.

  • Import the stock valuation method. You can use the Item stock cost import to define costs per item.

In the following example, only the first three fields are mandatory.

!item | !site | valuationMethod | prodLeadTime | safetyStock | batchQuantity | replenishmentMethod | reorderPoint | preferredProcess
APPLE_PIE | S02-FR | standardCost | 10 | 20 | 20 | byMRP | 100 | purchasing
CHERRY_PIE | S02-FR | averageCost | 11 | 25 | 50 | byMRP | 100 | purchasing
BANANA_PIE | S02-FR | fifoCost | 12 | 30 | 100 | byMRP | 100 | purchasing

Item site costs

The default cost of items used for the stock entries is stored in the Item site cost record.

The key related to the item site is a two-component key. Its fields are separated by a pipe (|). The first part of the key is the item code and the second part is the site ID.
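
For example, reusing the item and site codes from the examples above, the item-site key column of an item site cost file would contain a value such as:

APPLE_PIE|S02-FR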

Business entities, sites, customers and suppliers, addresses, and contacts

Data structure description

Business entities are the common denominator for sites, customers, and suppliers. You can import a business entity on its own, or a business entity that is also a site, a customer, a supplier, or a combination of these roles.

The structure of the contacts and the addresses is the following: addresses and contacts are collections at the same level in the business entity, and the contact has a link to the address. This means that you cannot import both the addresses and the contacts of a business entity in creation mode. You must import them in two steps.

On the other hand, customers, suppliers, and sites can be imported in a single file with the business entity information. Here's the structure summarized.


Data structure summarized

Depending on how you want to import the different entities, you can use templates associated to several nodes and import at the same time vital references.

This table shows which template you can use to import the different entities above. Entities in italic are optional:

Data that can be imported | Template based on | Possible modes
Business entities, addresses (at least 1 main address) | BusinessEntity | Creation/update
Contacts for an existing business entity, site, supplier, or customer | BusinessEntity | Update only
Addresses for an existing business entity | BusinessEntityAddress | Creation/update
Customer + business entity, addresses, delivery address, Intacct customer, item-customer references | BusinessEntity | Creation/update
Supplier + business entity, addresses, delivery address, Intacct supplier, item-supplier references | BusinessEntity | Creation/update
Site + business entity, addresses, delivery address | BusinessEntity | Creation/update
Customer + Supplier + Site + business entity (business entity with several potential roles), and previous entities listed in the 3 cases above | BusinessEntity | Creation/update
Customer (for an existing business entity), item-customer | Customer | Creation/update
Item-customer information (for existing customers) | ItemCustomer | Creation/update
Supplier (for an existing business entity), item-supplier | Supplier | Creation/update
Item-supplier information (for existing suppliers) | ItemSupplier | Creation/update
Site (for an existing business entity) | Site | Creation/update

You can create the supplier, site, or customer together with its associated business entity by using a BusinessEntity-based template. The Supplier, Site, and Customer templates are used when you import the business entity first, and then the supplier, site, or customer details separately.

The contacts are added afterward, using the update mode on the business entity.

Addresses, item-customer, and item-supplier information can be imported either separately, by using a simpler template, or at the same time as the main records.

Simplified business entity templates such as Business entity site, Business entity customer, Business entity supplier, and Business entity address are provided to avoid having too many properties in a single template.

Business Entities, Sites, Suppliers and Customer templates

The node associated with these import templates can be BusinessEntity. You can also create templates associated with Customer, Supplier, and Site if you want a template dedicated to one type of business entity when the business entity already exists.

Creating a new template associated with BusinessEntity provides a template with 189 properties, not including custom fields and non-vital properties. The previous nodes (Customer, Site, Supplier) still exist and can be used to import a customer, a supplier, or a site when the corresponding business entity has already been created.

We provide different templates in create and update mode. They are associated with the BusinessEntity node and include only some of the properties:

  • BusinessEntity is a template dedicated to importing business entities alone:
    • with addresses
    • with delivery details used when you later import the entities separately as customers
    • with contacts associated to addresses on existing business entities in update mode.
  • BusinessEntityAddress is a template dedicated to importing business entity addresses alone, with delivery details if wanted. It is recommended to import business entities and addresses together.
  • BusinessEntitySite allows importing business entities and sites together:
    • The main record is the business entity.
    • The site information is in a separate group at the same level.
    • You can add addresses as a detail group.
    • Contacts can be added to existing sites in update mode.
  • BusinessEntitySupplier allows importing business entities and suppliers together:
    • The main record is the business entity.
    • The supplier information is in a separate group at the same level.
    • You can add addresses as a detail group.
    • Contacts can be added to existing suppliers in update mode.

This template doesn’t include all the related supplier information, such as item and supplier information.

When creating a single address, you can import business entities, suppliers, and supplier and item information in a single file.

You cannot have, in a single file, a business entity record with multiple addresses and multiple items associated with a given supplier. Instead you can:

  • Create suppliers with a single address per supplier and many item and supplier records per supplier, and then import other addresses by using the BusinessEntityAddress template.
  • Create suppliers with multiple addresses, and import item and supplier records by using the ItemSupplier template.

You can also import business entities and suppliers separately by using the corresponding BusinessEntity and Supplier templates.

  • BusinessEntityCustomer allows importing business entities and customers together:
    • The main record is the business entity.
    • The customer information is in a separate group at the same level.
    • You can add addresses as a detail group.
    • Contacts can be added to existing customers in update mode.

This template doesn’t include all the related customer information, such as item and customer information.

When creating a single address, you can import business entities, customers, and customer and item information in a single file.

You cannot have, in a single file, a business entity record with multiple addresses and multiple items associated with a given customer. Instead you can:

  • Create customers with a single address per customer and many item and customer records per customer, and then import other addresses by using the BusinessEntityAddress template.
  • Create customers with multiple addresses, and import item and customer records by using the ItemCustomer template.

To summarize, in a single file, we can import simultaneously:

  • A business entity and all its addresses.
  • The corresponding site if the business entity is a site.
  • The corresponding customer data if the business entity is a customer, with sub-levels such as item-customers if a unique address has been created.
  • The corresponding supplier data if the business entity is a supplier, with sub-levels such as items-suppliers if a unique address has been created.

The contacts must be imported separately, even if they appear when you create a template associated with a business entity. You cannot express the link with a record that doesn’t exist yet. Any template associated with a business entity can be used in update mode to create the contacts and attach them to the right address.

Business entity alone

Here is an import file that creates a business entity that is not a site, a customer, or a supplier. It has an address and delivery information that can be used when a corresponding customer is created later.

Sites

Importing sites is not recommended if you are connected to Sage Intacct. You need to manage sites manually because there are some constraints: financial sites must be Sage Intacct entities and non-financial sites must be locations in Sage Intacct.

However, if you need to import sites, the import has the following characteristics:

  • The import is available in creation and update mode.
  • The link to a business entity is mandatory. You need to import the business entity first and then import the site as a business entity by giving the corresponding code in the *businessEntity column.
  • The isActive, isFinance, isPurchase, and isManufacturing columns are not mandatory. Their default value is TRUE for a site creation. The isLocationManaged column is also not mandatory and its default value is FALSE for a site creation.
  • If the isFinance column is set to FALSE, the financialSite column becomes mandatory and requires a financial site belonging to the same company.

If you import sites alone after having created the business entities, your file will have a single level of data.
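
As an illustration, a site-only import file could look like the following sketch. The id column name and all the values are assumptions for the example; the other columns are those described in the list above:

!id;*businessEntity;isActive;isFinance;isPurchase;isManufacturing;isLocationManaged;financialSite
S03-FR;BE-S03;TRUE;FALSE;TRUE;TRUE;FALSE;S01-FR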

Business entity addresses

You can import business entity addresses separately from the business entities in creation or update mode.

The key used for a business entity address has 2 components that can be used later if another file refers to such an address. The first part is the business entity's ID and the second part is the _sortValue technical property. When addresses are imported separately, an algorithm assigns this second part randomly.

The format of a business entity address is based on 2 levels that are the address and a _sortValue.

The reference to an address in other imported records will use the business entity key and a _sortValue.

When the addresses are imported separately, the _sortValue is assigned with an automatic but unpredictable value.

Importing contacts is done in a separate file, imported in update mode with a business entity-based template.

Customers

Customer import has the following characteristics:

  • The import is available in creation and update mode.
  • The link to a business entity is mandatory. You can either import the business entity and the customer simultaneously, or proceed in two steps: the business entity template first, then the customer template.
  • At least one delivery address is mandatory. The ship-to address refers to the addresses entered on the business entity. You need to use the business entity key followed by a pipe (|) and the _sortValue key of the address. If the business entity and its addresses were imported in a single file, the _sortValue is the rank of the address in the business entity multiplied by 10. For example, creating a customer associated with the ABCD business entity whose delivery address is the third address requires the ABCD|30 value in the shipToAddress column (see the sketch after this list).
  • The payment terms and delivery mode are mandatory. Refer to the appendix for the standard available codes.
  • You can import item-customer information at the same time. It is at the same nesting level as the addresses, so the item-customer group can be placed before or after the address group.
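For illustration only, a minimal two-step customer import for an existing ABCD business entity might look like the following sketch. Apart from the shipToAddress format explained above, every column code (*businessEntity, name, *paymentTerm, *deliveryMode) is an assumption; generate the Customer template to get the exact list of mandatory columns.

*businessEntity name *shipToAddress *paymentTerm *deliveryMode
ABCD Ace Building Corp ABCD|30 NET30 ROAD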

Item customers

You can import item customers in a separate step. This import is available in creation and update mode.

The import data looks like the following example.

*item !customer id name *salesUnit *salesUnitToStockUnitConversion minimumSalesQuantity maximumSalesQuantity
APPLE_PIE CUS-000001 Apple Pie Apple pie EACH 1 10 200
CHERRY_PIE CUS-000001 Cherry Pie Cherry pie EACH 1 10 200

Suppliers

Supplier import has the following characteristics:

  • The import is available in creation and update mode.
  • The link to a business entity is mandatory. You need to import the business entity first, then import the supplier by giving the corresponding business entity code in the *businessEntity column.
  • The rules for the primary address, default bill-by address, pay-to address, and return-to address are the same as for the customer: enter the reference to the business entity followed by a pipe (|) and by 10 times the rank of the address, if the _sortValue has been set that way. Only the primary address is mandatory.
  • The certificates and the item and supplier information groups can be present, but they are not mandatory.

Several examples are provided in the ready-to-use CSV files, because there are several ways to import the data:

  • Business entities first, followed by suppliers related to the business entities.
  • Business entities and suppliers in a single step, with one address only and many item-supplier records.
  • Business entities and suppliers with many addresses in a single step.

Having many addresses and many item-supplier records for a single supplier in the same file is not yet possible.
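As an illustrative sketch, a two-step supplier import referencing an existing business entity could look like the following. The *businessEntity column and the address key format come from the rules above; the other column codes (name, *primaryAddress, *paymentTerm) are assumptions to check against the Supplier template.

*businessEntity name *primaryAddress *paymentTerm
SUPBE01 Leeds Bakery Supplies SUPBE01|10 NET30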

Item suppliers

You can also import item suppliers in a separate step. The import is available in creation and update mode.

This gives us the following example of import data:

*item !supplier supplierItemCode supplierItemName supplierPriority isDefaultItemSupplier *purchaseUnitOfMesure minimumPurchaseQuantity purchasedTime isActive
APPLE_PIE SUP-000001 A-PIE Apple Pie from Leeds 1 TRUE EACH 10 1 TRUE
CHERRY_PIE SUP-000001 C-PIE Cherry pie from York 2 FALSE EACH 12 3 TRUE

Location types, zones, and locations

These three entities are interdependent and simple to import. First, import the location types:

!lid description name *localisationCategory
BULK Bulk locations BULK Internal
DOCK Dock locations DOCK dock

Then, you can import the location zones, which are linked to the sites. As with the previous one, it is a single-level import:

!site !lid *name *zoneType
S02-FR ZONE01 First zone secured
S02-FR ZONE02 Second zone restricted

Finally, you can import locations. Once again, it is a single-level import, but the mandatory locationZone key has two parts: the site and the zone code.

A location type is also mandatory:

!lid !locationZone *name isActive dangerousGoodAllowed *locationType volumeAllowed weightAllowed storageCapatacity
LOC-001 S02-FR|ZONE01 First location TRUE TRUE BULK 100 1000 1000
LOC-002 S02-FR|ZONE02 Second location TRUE TRUE DOCK 120 400 2000
LOC-003 S02-FR|ZONE02 Third location TRUE TRUE DOCK 300 1890 2400
LOC-004 S02-FR|ZONE01 Fourth location FALSE FALSE BULK 400 2300 3450

Bills of materials

You can import bills of materials only in creation mode.

There are two nested levels, the header and the components.

The item reference and the site reference fields are mandatory in the header. For the components, the component number, the item, the unit, and the link quantity fields are required.

The import file looks like the following example.

*item *site name status baseQuantity #components componentNumber lineType item name#1 unit isFixedLinkQuantity linkQuantity scrapFactor instruction
APPLE_PIE S01-FR Apple pie availableToUse 10 1 10 normal APPLES Sliced apples EACH TRUE 30 2
APPLE_PIE S01-FR Apple pie availableToUse 10 2 20 normal PASTRY Puff pastry EACH TRUE 10 3
APPLE_PIE S01-FR Apple pie availableToUse 10 3 30   EGGS Eggs EACH TRUE 30 1

Shift details

You need to import shift details to be able to import routings because they are referenced in routing resources.

The format is like the following example.

*id name *shiftStart *shiftEnd
AFTERNOON Afternoon shift 13:00 21:00

Daily shifts

After importing shift details, you can import daily shifts that refer to a collection of shift details.

The format is like the following example.

*id name isFullDay #shiftDetails *shiftDetail
FULL_AFTERNOON Full afternoon FALSE 1 AFTERNOON

Weekly shifts

After importing shift details and daily shifts, you can import weekly shifts that refer to one or several daily shifts.

The format is like the following example.

*id *name isFullWeek mondayShift tuesdayShift wednesdayShift thursdayShift fridayShift saturdayShift sundayShift
AFTERNOON_4DAYS Week 4 afternoon FALSE FULL_AFTERNOON FULL_AFTERNOON FULL_AFTERNOON   FULL_AFTERNOON

Capability levels

You need to import capability levels when dealing with technical data.

The import file looks like the following example.

*id *name *description level
CAP1 Standard Standard capability 10
CAP2 Experienced Advanced capability 20
CAP3 Expert Expert capability 30
CAP4 Genious Unique capability 40

Resources

Sage Distribution and Manufacturing Operations manages labor, machine, and tool resources. Each type of resource has common and specific information. Therefore, there are 3 distinct entities to manage this import, but a resource has a unique key.

Group resources

Group resources have 3 groups of information: the header, the resource cost categories, and replacement groups.

Labor resources

Labor resources have 3 groups of information: the header, the capability details, and the resource cost categories.

The import file looks like the following example.

Because the capability details and the resource cost categories are both first-level collections under the header, they appear on different lines.
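The sketch below only illustrates this layout, with one line carrying a capability detail and the next one carrying a resource cost category for the same resource; all column codes here are assumptions to check against the labor resource template.

*id name #capabilities *capability #resourceCostCategories *costCategory
OPERATOR-01 Line operator 1 CAP1
OPERATOR-01 Line operator     1 LABOR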

Tool resources

Tool resources have 2 groups of information: the header and the resource cost categories.

Machine resources

Two data groups are present in the template: the header and resource cost categories.

The minCapabilityLevel column references a capability level by its key. In the following example, the key is CAP3.
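This is only a sketch of such a line: every column code except minCapabilityLevel is an assumption to check against the machine resource template.

*id name minCapabilityLevel #resourceCostCategories *costCategory
OVEN-01 Baking oven CAP3 1 MACHINE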

Routings

You can import routings. There are 4 groups of data:

  • The header
  • Operations
  • Resource groups
  • Additional sub-resources

Initial stock entries

Importing stock entries is the recommended method to enter initial stock values.

From the Management tab on the Company page, turn the Stock posting toggle off. If you don’t, the import will trigger the posting of all the movements in Sage Intacct, which causes issues because you are importing existing stock that has already been accounted for.

The import does not update the stock. You need to post your stock entries afterward.
You can create a single document per site with all the items on it. Do not post the result to Sage Intacct at this stage.
When the import and the stock update are finished, turn the Stock posting toggle back on. Otherwise, subsequent regular stock entries will not generate any posting and those postings will be lost.

You can import simple entries (header, lines, and sub-lines) in creation mode only by using a StockReceipt template limited to the first three levels. The fourth level is technical and cannot be imported.

The following characteristics are mandatory:

  • The stock site
  • The item code
  • The quantity in stock unit

You can omit the document number if a sequence number has been assigned to the document.
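As an illustration, a minimal stock receipt import could look like the sketch below. The mandatory information comes from the list above, but the exact column codes (*stockSite, #lines, *item, *quantityInStockUnit) are assumptions to check against the StockReceipt template.

*stockSite #lines *item *quantityInStockUnit
S01-FR 1 APPLE_PIE 100
S01-FR 2 CHERRY_PIE 50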

You can add a column called storedDimensions to the previous file. This is a JSON column that contains up to 3 different dimensions in the following format: "{""dimensionType01"":""500"",""dimensionType02"":""300""}".
The rule is to enclose the whole JSON field in double quotes and to escape the double quotes inside the JSON by doubling them. The JSON value imported here is {"dimensionType01":"500","dimensionType02":"300"}.
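
For example, reusing the assumed columns from the previous sketch, a line carrying these two dimensions would look like this:

*stockSite #lines *item *quantityInStockUnit storedDimensions
S01-FR 1 APPLE_PIE 100 "{""dimensionType01"":""500"",""dimensionType02"":""300""}"
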
You can import serial and lot number details in a stock receipt. This adds another sub-group of data in the stock details.
The default template does not list the Serial number data group because the Serial number option is deactivated by default.
If you want to import serial numbers, you need to turn the Serial number toggle on in your tenant. Then, you can use the StockReceiptSerialNumber template and select the Reset grid button at the top of the grid. By doing so, the StockReceiptDetail->StockDetailSerialNumber group displays in the grid.

Serial number and lot number import stock entries

The serial number and lot number details are given in the corresponding groups.

The existingLot and lotCreateData fields are still in the template but are obsolete. You can delete their columns from your files.

Sales orders

You can import sales orders in creation mode only. The structure has multiple nested groups, in particular single-line groups if you want to enter dedicated addresses.

The following example shows the minimum fields necessary to import a sales order.

All the other fields can be defaulted by the system, mainly from the customer and the item.

*soldToCustomer *salesSite *requestedDeliveryDate #lines *item *quantityInSalesUnit
reference reference date collection reference decimal IGNORE
sold to customer sales site expected delivery date(yyyy-MM-dd) lines item quantity in sales unit IGNORE
CUS-000003 S01-FR 2023-02-01 1 APPLE_PIE 130

Purchase orders

You can import purchase orders in creation mode only. The structure has multiple nested groups, in particular single-line groups if you want to enter dedicated addresses.

There are twelve groups of data and 291 fields by default, but only the header and the lines are necessary.

The following file shows the minimum fields that are necessary to import purchase orders:

*site *supplier #lines *item quantity grossPrice
site (#id) supplier (#businessEntity) line item (#id) quantity gross price
S01-FR SUP-000004   1 APPLE_PIE 10 20
S01-FR SUP-000004   2 CHERRY_PIE 15 20
S01-FR SUP-000004   3 BANANA_PIE 20 30
S01-FR SUP-000001   1 APPLE_PIE 50 18

Stock count

Managing stock counts can be done in two ways:

  • You can import a file in creation mode containing all the stock count details. Use a minimum list of properties; the document number can be left blank, and the status should be set to counted. Then validate the imported stock count document to update the stock levels.
  • You can create a stock count document, export it, modify the corresponding CSV file to enter the right stock values, and then reimport it in production. You need to set the status to counted in your file. Then, validate your stock count again.

A standard stock count template is provided; it can be used to import data in creation and update mode. If you recreate this template, make sure that some of the header information, such as the criteria used to select the lines to be counted, is not in the file you import. Otherwise, this might generate errors when the file is checked.

Stock count in creation mode

If you perform stock counts, you can import the result, but you must take care of several points:

  • In creation mode, the import always creates a new document. If you generated a stock count document in the system and exported the result, importing it creates a new document that you can post; you can then delete the first one.
  • The import template is not delivered. Create it with the default columns, even if they aren’t all mandatory.
  • Status changes cannot be done by a count, but the added quantity can have a different status (see line 3 below).

The header lines must include the following columns:

number *description *stockSite status effectiveDate lastCountDate
  Count1 S02-FR counted 2023-12-15 2023-12-15
  Count1 S02-FR counted 2023-12-15 2023-12-15
  Count1 S01-FR counted 2023-12-15 2023-12-15
  Count2 S01-FR counted 2023-12-15 2023-12-15

A stock count document is created for every set of identical header lines. In the example, 3 documents will be created because the site differs on line 3 and the description changes on line 4. The counted status is mandatory to make sure the system considers that the count has been done.

#lines stockStatus *item location quantityStockUnit newLineOrderCost countedQuantityInStockUnit
1 A APPLE_PIE LOC-018|S02|S02-UK|ZONE11   12 10
2 A CHERRY_PIE LOC-017|S02|S02-UK|ZONE11   23 14
1 A APPLE_PIE LOC-011|S02|S01-FR|ZONE08   15 12
2 A CHERRY_PIE LOC-010|S02|S01-FR|ZONE11   32 13

The detail lines must include the expected quantity and the counted quantity, as well as the status and the location if needed.

A single sub-detail can be given per line. It must repeat the location, the site, the status, the stock unit, the item, and the difference between the expected quantity and the counted one.

The validation of the imported stock count documents will update the stock and the valuation.

If you manage lots and sub-lots, you can enter existing lots at different locations or create new lots. In this example, the first line creates a new lot (the lot#1 column is empty), and the other lines refer to existing lots that weren’t present at the selected locations.

//stockDetail lot#1 lotNumber supplierLot expirationDate sublot
1   AB28 FAB28 2025-03-12 005
1 CHERRY_PIE|AB21|024        
1 APPLE_PIE|AB18|005        
1 CHERRY_PIE|AB21|024        

Stock count in update mode

The same import template can be used, but when importing a stock count that has already been created, the number field becomes mandatory. The most common case is to create a template, export the existing stock count with it, modify the counted values, and then import the file again.

You can create new lots when the import is performed in creation or update mode, and you can enter existing lots at different locations. However, entering a new serial number during a stock count is impossible, whether through the import or direct entry. You will have to go through a stock entry.

Item supplier prices

You can import the itemSupplierPrice node in creation and update mode. For a given site, supplier, and item, it stores a date range, a quantity range, a unit price, and a price type (an enumeration). The site is not mandatory, and the date range uses the range format.

Every line creates a different record in the item supplier price table. The system updates the records in this table by using the natural key based on the item, supplier, site, unit, currency, type, validity date range, and quantity range. You can update a record by changing the price value, but not the date range or the quantity range. Changing a range creates a new record, and if the date ranges or quantity ranges overlap, an error occurs.
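As a purely illustrative sketch, an item supplier price file could look like the following. Every column code and the range notation shown here are assumptions; generate the ItemSupplierPrice template to get the exact codes and the expected range syntax.

*item *supplier site *unit *currency priceType validityDates quantityRange price
APPLE_PIE SUP-000001 S01-FR EACH EUR catalog [2024-01-01,2024-12-31] [10,100] 12.50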

Roles

You can import roles in creation mode only.

Report templates

Reports and report templates can be imported. Manually generating a file that corresponds to a report template is nearly impossible because such a file contains complex JSON structures, but you can export a template from one tenant and reimport it into another tenant.

Some precautions must be taken:

  • Import is available in creation mode only.
  • The report template has a mandatory reference to a report, so you need to import the report first if it doesn’t exist on the destination tenant.
  • The report also has a reference to the report template, which can lead to an issue, but this reference isn’t mandatory. If you delete the reference before exporting the report, you can import the report first and the associated template afterward.

Work orders

The work order number cannot be imported. If you create a template from scratch, this field will be present in the template and even marked as mandatory. Make sure the field is deleted before saving the template, otherwise the import will fail. There are also _sortValue properties for every collection of sub-lines, and they must also be deleted from the template to prevent the import from failing.

The standard templates provided as presetting data have already been cleaned of these properties.

Work order operation tracking

Operation tracking can be imported in creation mode only.

You can import in a single line the different consumptions (time, quantity, and a flag that indicates whether the tracking is complete). It looks like this:

A header with the reference of the work order, but no key for the tracking record:

*workOrder Site entryDate effectiveDate
S01-FR|WO2400001 S01-FR 2024-10-16 2024-10-16

N lines associated with the header (no _sortValue column):

#lines line *workOrderOperation trackingType completedQuantity *actualResource setupTimeUnit runTimeUnit actualSetupTime actualRunTime complete
  1 S01-FR|WO240001|20 WO 25 JOHNDOE|S01-FR MINUTE MINUTE 2 300 FALSE

The same limitation applies to this import: the number and _sortValue columns must not be present in the template or in the file. The numbering is automatic.

Appendix: File description and examples

A set of dependent files is supplied as working examples that can be imported into an empty tenant. This also works with a non-empty tenant, but you might get errors if you try to recreate existing records.

By default, a standard tenant doesn’t activate serial numbers. The examples below that are associated with templates ending with 1 deal with serial numbers. If you want to test them, it is important to:

  • Enable the serial number option if you want to run them all.
  • Create the templates where serial number fields are present (they aren’t by default). In the table below, the template name ends with a 1 when this is the case. If you want to test almost every case without activating the serial number option and without creating additional templates, just use the 13-item-Noserial.csv file with the standard Item template instead of the 13-Item.csv file, and skip the 41-Stock-Receipt-Serial-Numbers.csv file.
Some files must be imported in update mode. Trying to import them in creation mode will fail because it might create duplicate records, or because some keys that are mandatory for creation are missing.

 

 

Appendix: Change log and previous releases

A refactoring of the business entities has been done between V42 and V44, making the import of customer, supplier, and site data easier. If you have import files related to business entities, addresses, contacts, sites, customers, or suppliers created in a release prior to these, you will need to reorganize them. Carefully read the related sections.

 

Evolutions in V44 and V46:

  • For the itemSite block in item import files: the unused lastValuation property has been removed.
  • For the itemSite records: the valuationMethod property, which can take the values standardCost, fifoCost, or averageCost, is now mandatory.
  • For items imported in update mode: you can modify allergen and chemical properties. A natural key associated with a _sortValue column has been added to the corresponding records. If these groups are present in your CSV file, you need to add a !_sortValue column in the #allergen group and a !_sortValue#1 column in the #classification group. Nothing changes in creation mode or if the corresponding groups are not in your CSV file.
  • In V46: importing reports and report templates is now possible.
  • Importing roles has been added. The template isn’t supplied as a standard one, but you can create it.

Additional evolutions in V48:

  • Two new templates were supplied (work order and operation tracking), as well as corresponding working examples.

Additional evolutions in V49:

  • Several templates have been added for export purposes only. They allow you to export data from inquiries that have been reorganized as main lists, including purchase order lines, sales order lines, stock details, stock journal, and stock receipts.
  • Stock counts can now be imported in creation and in update mode. A new example is supplied to show how existing lots can be entered at new locations and how new lots can be created directly during a count.