Data Sources automatically acquire or receive timeseries data using a variety of transport options. Connect to a data logger or collect data from files. Data Sources can be created inside Locations only.
The type of Data Source (data logger or file) is selected at time of creation and cannot be changed. This section is specific to file Data Sources.
Note
Eagle.io supports the acquisition and storage of up to 20000 records per Data Source per day. Exceeding this limit will trigger an Overload Alarm on the Source. Refer to Historic Data Limits for more information.
Configure properties via the Workspaces Tree context menu or List View properties icon. Requires configure permission.
The general section allows you to specify the type of file(s) you would like to collect.
Select the type of file(s) you would like to collect. The file type must be selected during Data Source creation and cannot be changed.
Delimited Text
Data is transmitted as rows of values separated with a specific delimiter character.
JSON Time Series
Data is transmitted in the JSON Time Series format. Note: JTS files to be acquired must contain a columns header that specifies the name and dataType of each column included in the data.
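For illustration only, a minimal JTS file with a single numeric column might look like the sketch below (names and values are hypothetical; refer to the JSON Time Series format specification for full details):

    {
      "docType": "jts",
      "version": "1.0",
      "header": {
        "columns": {
          "0": { "name": "Temperature", "dataType": "NUMBER" }
        }
      },
      "data": [
        { "ts": "2015-01-01T00:00:00Z", "f": { "0": { "v": 22.4 } } },
        { "ts": "2015-01-01T00:15:00Z", "f": { "0": { "v": 22.1 } } }
      ]
    }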
Custom Format
Data is transmitted as binary or text data (in any format) and transformed by a Converter into JSON Time Series format. A Converter can be constructed and submitted for approval via the Converter repository. Once accepted, your custom format will appear in the File type list. Note: the eagle.io team can assist with developing a Converter for your custom format.
Select how acquired data will be written:
Merge and overwrite will insert the acquired data into the existing historic data and overwrite the existing values when timestamps match.
Merge and preserve will merge the acquired data into the existing historic data and will not overwrite existing values when timestamps match.
Replace existing will remove all existing historic data within the range of data being acquired.
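As a hypothetical illustration, suppose the existing data holds values at 10:00 and 10:05, and the acquired file contains values at 10:00 and 10:15. Merge and overwrite keeps all four records, taking the 10:00 value from the acquired file. Merge and preserve keeps all four records, retaining the existing 10:00 value. Replace existing first removes both existing records (they fall within the 10:00 to 10:15 range being acquired), leaving only the two acquired values.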
Configure how you would like to connect to your file(s).
None
Select this option when data will arrive exclusively via the HTTP API or manual import.
Download from FTP site
Connect to an external FTP Server to acquire data files. Note: FTP is supported in passive mode only.
- FTP host
- Host name or IP address of the FTP Server.
- Port
- TCP Port to use for connection to the FTP Server. Default Port 21 for standard FTP/FTPS or 990 for FTP with Implicit SSL.
- Mode
- FTP Secure connection options. Use FTP for standard connection or choose an available FTPS mode. Consult your network administrator to verify the settings required for connection to your FTP Server.
- Remote path
- Specify the base remote path on the FTP Server that you have access to. You can retrieve files from sub-directories within this path.
- User
- Specify the name of the user account for the system to use when connecting to the FTP Server. Enter anonymous if no user account is required.
- Password
- Password for the associated user account (or leave blank for none).
Download via HTTP
No configuration required.
The File URL (when adding file) must start with http:// or https://. Optionally specify the login username and password as part of the URL if HTTP Basic authentication is required. eg. http://user:password@company.com/file.csv.
Parts of the URL can be dynamically generated by including JavaScript expressions surrounded by double braces, eg. {{ expr }}. Processing & Logic Global Variables can be referenced, along with the T( expr ) function for time manipulation. An example URL with expressions to request data from the most recently acquired record through to the most recently available:
http://data.com/?start={{T(SOURCE.currentLocalTime).format('YYYY.MM.DD')}}&end={{T(NOW).format('YYYY.MM.DD')}}
This outputs a URL similar to:
http://data.com/?start=2015.01.01&end=2017.12.01
Upload via HTTP
Data can be submitted as an HTTP POST body to the automatically generated URL. Password is optional.
- IP whitelist
- You can optionally restrict incoming connections to this source to a list of approved IP addresses specified using CIDR notation, eg. 192.168.7.52/32. Leave empty for no IP address restrictions.
Note: Consider using the API for more options when uploading via HTTP.
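As a minimal sketch, a file could be submitted from Python with the requests library; the URL below is a placeholder for the auto-generated URL shown in the Source properties, and the optional password is omitted:

    import requests

    # Placeholder: substitute the auto-generated upload URL from the Source properties.
    UPLOAD_URL = "https://<auto-generated-url>"

    # Submit the data file as the HTTP POST body.
    with open("data.csv", "rb") as f:
        response = requests.post(UPLOAD_URL, data=f)
    response.raise_for_status()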
Email to eagle.io
Email data using the auto-generated email address exactly as shown.
- Sender address filter
- For added security you can filter by sender email address. Restrict to a specific email address or to a specific domain. eg. user@company.com or @company.com. Leave blank for no restriction.
Note: The maximum accepted size per email (including all data files) is 25MB.
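As a sketch, a data file could be emailed with Python's standard library; both addresses and the mail server below are placeholders (use the auto-generated address exactly as shown in the Source properties, and a sender that passes any configured filter):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "logger@company.com"      # must satisfy the sender address filter, if set
    msg["To"] = "<auto-generated-address>"  # from the Source properties
    msg["Subject"] = "Data file"

    # Attach the data file (total email size must stay under 25MB).
    with open("data.csv", "rb") as f:
        msg.add_attachment(f.read(), maintype="text", subtype="csv", filename="data.csv")

    with smtplib.SMTP("smtp.company.com") as smtp:  # your own outgoing mail server
        smtp.send_message(msg)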
Send SMS text message
Send a text message to one of our incoming SMS phone numbers:
- Australia:
- +61 488 811 086
- United States:
- +1 408 400 3928
- Custom phone numbers:
- Please contact us to arrange a custom incoming SMS number to be exclusively associated with your account. This will allow you to choose your own Source Id.
- SMS messages must contain exactly 2 lines with the following content:
- Source Id
- Data payload
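For example, a two-line message for a Source with the hypothetical Id fruit-honey-jacket and a comma-delimited payload could read:

    fruit-honey-jacket
    24.5,56,1013.2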
Publish to mqtt.eagle.io
Publish data with MQTT using the following settings:
- Broker address
- mqtt.eagle.io
- Broker port
- Use port 1883 for standard connection or port 8883 for SSL.
- Topic
- Use the auto-generated topic exactly as shown. eg. io/eagle/source/fruit-honey-jacket
- MQTT Password
- Optional password (leave blank for none).
- IP whitelist
- You can optionally restrict incoming connections to this source to a list of approved IP addresses specified using CIDR notation, eg. 192.168.7.52/32. Leave empty for no IP address restrictions.
Note: The maximum accepted size per MQTT message is 10MB.
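As a minimal sketch using the paho-mqtt Python library; the topic shown borrows the example above, so substitute your own auto-generated topic, and the commented auth line is an assumption that only applies if an MQTT password has been set:

    import paho.mqtt.publish as publish

    with open("data.csv", "rb") as f:
        payload = f.read()

    publish.single(
        topic="io/eagle/source/fruit-honey-jacket",  # use your auto-generated topic exactly as shown
        payload=payload,
        hostname="mqtt.eagle.io",
        port=1883,  # use 8883 with a tls argument for SSL
        # auth={"username": "<user>", "password": "<mqtt-password>"},  # assumption: only if a password is set
    )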
Read from Amazon S3
Connect to an Amazon S3 bucket to acquire data files.
- Bucket
- Unique S3 bucket name. eg. my.aws.bucket
- Access key Id
- User access key Id (generated from AWS Console).
- Secret
- Secret token associated with access key.
Note: Matching files will be removed from S3 after acquire.
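As a sketch using the boto3 library, a data file can be dropped into the configured bucket for collection; the credentials and bucket name below are placeholders matching the examples above:

    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id="<access-key-id>",      # as configured on the Source
        aws_secret_access_key="<secret>",
    )

    # Place a data file in the bucket; it will be removed after acquisition.
    s3.upload_file("data.csv", "my.aws.bucket", "data.csv")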
Read from Dropbox
Dropbox is used to connect to a Dropbox account. An eagle.io folder will be created in your Dropbox Apps directory where you can place files for collection.
When changing the account, a popup window will be displayed which allows you to login to Dropbox and authorise access.
Upload to ftp.eagle.io
FTP your files to ftp.eagle.io using the auto-generated user name exactly as shown. Password is optional. Use TCP port 21 for standard FTP or TCP port 990 for Implicit SSL.
- IP whitelist
- You can optionally restrict incoming connections to this source to a list of approved IP addresses specified using CIDR notation, eg. 192.168.7.52/32. Leave empty for no IP address restrictions.
Note: Only one concurrent FTP connection is allowed per Source. The maximum accepted size per file is 100MB.
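A minimal sketch using Python's ftplib over standard FTP on port 21; the user name is a placeholder for the auto-generated user shown in the Source properties:

    from ftplib import FTP

    ftp = FTP("ftp.eagle.io")                           # standard FTP, TCP port 21
    ftp.login(user="<auto-generated-user>", passwd="")  # password optional

    # Upload the data file (maximum 100MB per file).
    with open("data.csv", "rb") as f:
        ftp.storbinary("STOR data.csv", f)
    ftp.quit()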
Upload to sftp.eagle.io
SFTP your files to sftp.eagle.io using the auto-generated user name exactly as shown. Password is optional. Use TCP port 22.
- IP whitelist
- You can optionally restrict incoming connections to this source to a list of approved IP addresses specified using CIDR notation, eg. 192.168.7.52/32. Leave empty for no IP address restrictions.
Note: Only one concurrent SFTP connection is allowed per Source. The maximum accepted size per file is 100MB.
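A comparable sketch for SFTP using the paramiko library; user name and password are placeholders:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("sftp.eagle.io", port=22,
                   username="<auto-generated-user>", password="<optional-password>")

    # Upload the data file (maximum 100MB per file).
    sftp = client.open_sftp()
    sftp.put("data.csv", "data.csv")
    sftp.close()
    client.close()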
Select how data should be read from the file:
Read all records will read in the entire file on each acquisition.
Read new records will only read new data added to the file since last acquisition.
Optionally delete files after acquire (not supported on all transports).
Collection is used to specify if and when data should be automatically collected from the Source.
The Series section allows you to assign Series from the Source to New or Existing Parameters.
You must assign Series to New or Existing Parameters and set the Series for collection by ensuring its checkbox is enabled. Any Parameters assigned to Series will be disabled when the Series is unchecked for collection.
Parameters can be re-assigned to new Series at any time without losing existing historic data.
The series icon indicates the type of Parameter that will be created. Rename and Delete operations should be performed from the Workspaces Tree.
Click the Delete buttons and save to permanently remove all historic data or events for Parameters contained within this Data Source. Alternatively use the Parameter Historic section to delete historic data or events for individual Parameters.
Time allows you to configure the timezone of the Source and associated options.
The Security section displays a list of all users and groups that have access to the Node. Users with security permission can restrict user and group access to the Node and its descendants by assigning the No Access role.
New users and groups must be added via the Workspace Security section.
Note
The Account Owner, Administrators or Workspace users/groups that have been assigned a Workspace role with the configure permission cannot be assigned the No Access role.
The Text Parser allows you to define how a delimited text file should be processed by the system including defining series to be assigned to Parameters. All scheduled collection and user acquisition requests will use the saved parser configuration to process any new data that has been appended to the file since last collection.
General settings are used to specify file format and encoding options.
The Parser extracts a sample from the beginning of the input text file and attempts to split the file into columns based on the current Column delimiter and Format.
This format should be used when each column represents a single series or sensor. Every row is considered a record and must contain a timestamp.
(Screenshot: example input text file and the resulting Parser Preview.)
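For illustration, a hypothetical input file in Column Series format, with a timestamp column followed by one column per sensor:

    Timestamp,Temperature,Humidity
    2015-01-01 00:00:00,22.4,56
    2015-01-01 00:15:00,22.1,57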
This format should be used when each row (line in file) represents a single series or sensor. Every row must contain a timestamp, a unique identifier for the sensor (Series ID) and a value. The parser preview will only display rows with unique Series IDs. The sample input file may not contain all the possible Series IDs, so you can click the Add new series button and enter additional Series IDs as required. Ensure the correct data type is set for all series.
(Screenshot: example input text file and the resulting Parser Preview.)
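For illustration, a hypothetical input file in Row Series format, where each row carries a timestamp, a Series ID and a value:

    2015-01-01 00:00:00,TEMP01,22.4
    2015-01-01 00:00:00,HUM01,56
    2015-01-01 00:15:00,TEMP01,22.1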
In Column Series format, columns are assigned a series data type which determines how they should be processed and the type of Parameter that will be available for creation.
In Row Series format, columns are assigned as Record Time, Series ID, Value or Disabled. The column that is assigned as the Value column has individual data types assigned per row.
Use the data type drop down menu in the column/row to select a data type from the available options. Individual cells that do not match the selected series data type are displayed in RED. Hover over a cell in the Parser Preview to display a tooltip (where valid) showing how the raw data will be interpreted by the parser.
The Joins and Parser Configuration section (available from the properties icon at the top of each column or in each row of the Value column for Row Series format) is used to customize the parser for individual columns/series including joins, formatting and specifying quality.
Data values in input files are commonly split into separate fields and therefore will be shown as different columns in the Parser Preview. It is necessary to ensure each column you would like to use for Parameter creation has all joins/fields defined.
You can join additional columns via the Add join button. Select the column to join from the Joins drop down and the Field to be assigned. Columns are joined in the order they are displayed (top to bottom) and the result is shown in the Parser Preview.
Re-order any join/field by hovering over the item with a mouse to reveal the grab handle. Click and drag the grab handle to reposition the item in the list.
You may need to split or exclude data within a single column. Add a Separator field and exclude specific text (as specified in the Separator format field).
When using Row Series format, column joins and data type for the Value column must be configured per series. Use the dropdown data type selector and properties toggle button for each series/row.
Quality codes can be associated with each column, either by splitting the column or joining an additional column containing the quality codes.
All columns can be re-used as joins on other columns. For example you can have multiple parameter columns that all share the same quality code value by adding the quality code column as a join on each parameter column.
You can optionally specify a format and format filter to include or exclude the field or the entire record based on the data matching the specified format. For example, you could exclude all records where the data in Column 6 (Quality) equals 50.