DataScope Select - REST API


.Net SDK Tutorial 4: GUI control calls: Embargo, note files

Last update: July 2019
Environment: Windows
Language: C#
Compilers: Microsoft Visual Studio 2012/2013
Prerequisites: DSS login, internet access, having completed the previous tutorials
Source code: Download .Net SDK Tutorials Code

Tutorial purpose

This is the fourth tutorial in a series of .Net SDK tutorials. It is assumed that the reader has acquired the knowledge delivered in the previous tutorials before following this one.

This tutorial builds on the previous tutorial, and covers the following new topics:

  • Handling partial and late data deliveries when requested intraday data is under embargo. The DSS REST API has new functionality for this, which the DSS SOAP API does not have.

    Partial extractions occur if data is not extracted and delivered in one go, but in several subsets at different points in time, which could be the case if some of the data is under embargo.
  • Looking at all the resulting extraction files, to see the contents of the RIC maintenance note and extraction notes files, as these contain useful information.

 


Getting ready

Opening the solution

The code installation was done in Tutorial 1.

Opening the solution is similar to what was done in the previous tutorials:

  • Navigate to the \DSS REST API\Tutorial 4\Learning folder.
  • Double click on the solution file rest_api_adhoc.sln to open it in Microsoft Visual Studio.

 

Referencing the DSS SDK

Before anything else, you must reference the DSS REST API .Net SDK in the Microsoft Visual Studio project.

Important: this must be done for every single tutorial, for both the learning and refactored versions.

This was explained in Tutorial 2; please refer to it for instructions.

 

Viewing the C# code

In Microsoft Visual Studio, in the Solution Explorer, double click on Program.cs and on DssClient.cs to display both file contents. Each file will be displayed in a separate tab.

 

Setting the user account

Before running the code, you must replace YourUserId with your DSS user name and YourPassword with your DSS password in these two lines of Program.cs:

        private static string dssUserName = "YourUserId";
        private static string dssUserPassword = "YourPassword";

Important reminder: this must be done for every single tutorial, for both the learning and refactored versions.

Failure to do so will result in an error at run time (see Tutorial 1 for more details).

 

Understanding data embargoes

What is an embargo?

An embargo is a temporary block on the extraction of requested data.

Very recent data is more useful than older data. As such, its access is usually subject to fees imposed by the data provider, which could be an exchange or a third party.

DataScope Select data is governed by exchange rules that require delays in the release of real-time data. Exchange delays are enforced for Intraday Pricing and Premium EOD Pricing (which uses intraday snapshots) extractions, unless you are permissioned to access real-time exchange data. An agreement and contract with the provider are therefore required to access such data. User accounts are subject to a permissioning mechanism, which blocks or allows access to specific data.

Requesting EoD (End of Day) data is usually not an issue (except for premium EOD pricing), because the data is from a preceding day and has less market value. But embargoes can occur when requesting very recent intraday data (i.e. data from the current day) for which the user is not permissioned. For such requests the DSS REST API checks the DSS account's real-time permissions for all exchanges (or other data providers) in the request.

Embargo delays apply only to users who are not permissioned to receive real-time intraday data during the exchange's operating hours. No embargo is applied if you are permissioned for real-time intraday data, or if you request data outside of the exchange's operating hours.

You can view your exchange and specialist data permissions in the DSS web GUI by clicking on your account name at the top right, selecting Preferences, and then Third-Party Content Permissions:

There are 6 permission levels:

  • Realtime: fee-liable exchanges or specialists for which you can access real-time content (including fee-liable delayed exchanges you subscribe to).

    You will receive this content in extractions as soon as it is available.
  • Delayed: fee-liable exchanges or specialists for which you will access delayed content.

    You will receive delayed content in extractions once the embargo period applied by the exchange or specialist has expired.
  • No Intraday: fee-liable exchanges or specialists for which you cannot access content during normal trading hours.

    Note: content snapped outside of market hours is free of fees.
  • None: fee-liable exchanges or specialists for which you have no access to content, regardless of the time of day.

    Content is available upon subscription.
  • On: non-fee-liable specialists content for which you are permissioned.
  • Off: non-fee-liable specialists content for which you are not permissioned.

Contact your local account manager or sales specialist if you have queries on real-time intraday permissions.

 

As time passes, the data set that was requested becomes older, and after a certain delay it becomes free to access. At that point the embargo is lifted.

The delay for data to become free, i.e. the embargo duration, depends on the exchange:

  • It can vary in length throughout the trading day.
  • An embargo does not apply outside the trading hours of an exchange.
  • Typical embargo durations during trading hours range from 15 minutes to 1 hour; some specific instruments have longer delays, such as 12 hours for WIBOR (Warsaw Interbank Deposit Rates) and 24 hours for ICE LIBOR.
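To make the timing concrete, the following sketch illustrates the embargo arithmetic described above. It is plain C#, independent of the DSS SDK, and the 15-minute delay used in the comments is just an example; real durations depend on the exchange and your permissions.

```csharp
using System;

// Illustrative helper, not part of the DSS SDK: computes when a snapshot
// taken at snapshotTime becomes deliverable, given an embargo delay
// (e.g. 15 minutes for some exchanges during trading hours).
public static class EmbargoTiming
{
    public static DateTimeOffset ReleaseTime(
        DateTimeOffset snapshotTime, TimeSpan embargoDelay, bool withinTradingHours)
    {
        // Outside the exchange's trading hours no embargo applies,
        // so the data is deliverable immediately.
        return withinTradingHours ? snapshotTime + embargoDelay : snapshotTime;
    }
}
```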

 

Note:

  • Even if no embargoes apply, an extraction can take several minutes.

    This depends on several factors: requested data amount, data origin, DSS server load.

 

DSS embargoes handling, partial deliveries and preference settings

The DSS server waits for the embargo to be lifted and only extracts the data at the end of the embargo period.

If there is a mix of embargoed and not embargoed data, behavior can be tuned in the DSS general preferences settings, using the DSS web GUI:

The first setting allows you to decide if you want the RIC Maintenance Report file or not:

It contains information such as instrument identifier name changes, which could have occurred during the time period covered by the request. Instrument identifiers can change name, for instance if a company changes its name, is bought or merges, or simply for normalization reasons. This is not a frequent occurrence. The probability of an instrument identifier name change is obviously higher if the request is for a time series over a long time period.

 

Concerning embargoes, you can decide if you want:

  • A single delivery of all data at the end of all embargoes, or
  • Partial data extractions and deliveries spread through time as embargoes are lifted.

 

You can choose between:

  • Receiving the entire set of requested data at the end of the longest embargo period, in one single file:

  • Receiving several partial deliveries: first the non embargoed data, then the embargoed data.

One can select to receive all embargoed data in one single file:

or to receive several partial files, one per embargo period (if there is more than one):

In the case of partial deliveries, it is also possible to select between delta (differential) and full deliveries:

 

As of DSS 13.0, it is possible to limit an intraday extraction to only non-embargoed data, to exclude embargoed instruments from the resulting extraction. This can be set in the GUI when manually creating the template. With the SDK it can be done by setting the following condition when creating the report template:

Condition = new IntradayPricingCondition { OnlyNonEmbargoedData = true }

Any embargoed instruments will be identified in the corresponding notes file.

 

How to use this tutorial

This tutorial, a variation of Tutorials 2 and 3, shows how to handle embargoes.

If you want to test embargoes with this tutorial, you will need to find at least one instrument subject to embargo.

For the DSS user account we used when making this tutorial, an embargo applied to the Xetra exchange (Germany), which has a 15-minute embargo during market hours. This will not necessarily be the case for your DSS account, as its data entitlements might be different.

The list of instruments defined in the code of this tutorial covers 10 different exchanges, across the Americas, APAC and EMEA. You can change the instrument identifiers in the code, or add more to the list, to retrieve data from other exchanges, then run the tutorial, the aim being to find which data is subject to embargo. As explained above (under section What is an embargo?) you can view your exchange permissions in the DSS web GUI in the Preferences screen for Third-Party Content Permissions.

Once you have determined that one or more instruments are subject to embargo, you can change the DSS preferences to influence the behavior, test various cases and understand the mechanisms at play.

 

Understanding the code

We shall only describe what is new versus the previous tutorials.

DssClient.cs

This is nearly the same as the refactored version of Tutorial 3, except for the leading comment, and two added methods.

The first new method creates a report template for intraday pricing:

public string CreateIntradayPricingReportTemplate(
    string reportTemplateName, ReportOutputFormat reportOutputFormat, string[] contentFieldNames)
{
    IntradayPricingReportTemplate reportTemplate = new IntradayPricingReportTemplate
    {
        Name = reportTemplateName,
        OutputFormat = reportOutputFormat,
        CompressionType = ReportCompressionType.None,
        Delimiter = ReportDelimiter.Pipe,
        DeliveryType = ReportDeliveryType.None,
        //Condition = new IntradayPricingCondition { OnlyNonEmbargoedData = true }

    };
    reportTemplate.ContentFields.AddRange(contentFieldNames);

    extractionsContext.ReportTemplateOperations.Create(reportTemplate);
    return reportTemplate.ReportTemplateId;
}

This method is very similar to the CreateEodReportTemplate method we created in Tutorial 2.

The main difference is the declaration of the report template, which replaces the EndOfDayPricingReportTemplate with an IntradayPricingReportTemplate.

New as of SDK 13.0: if you only want non-embargoed data, you can uncomment the condition in the report definition.

The second new method creates a non-recurring timed schedule that launches a single extraction on a specific date, triggered at a specified time:

public string CreateTimedSchedule(
    string scheduleName, string instrumentListId, string reportTemplateId,
    DateTimeOffset scheduleDate, int scheduleHour, int scheduleMinute, string outputFileName)
{
    Schedule schedule = new Schedule
    {
        Name = scheduleName,
        TimeZone = TimeZone.CurrentTimeZone.StandardName,
        Recurrence = ScheduleRecurrence.CreateSingleRecurrence(scheduleDate, isImmediate: false),
        Trigger = new TimeTrigger
        {
            LimitReportToTodaysData = false,
            At = new[] { new HourMinute { Hour = scheduleHour, Minute = scheduleMinute } }
        },
        ListId = instrumentListId,
        ReportTemplateId = reportTemplateId,
        OutputFileName = outputFileName
    };

    extractionsContext.ScheduleOperations.Create(schedule);
    return schedule.ScheduleId;
}

Now that we have seen several schedule definition examples in the previous tutorials, this code should be easy to understand; it is just a variation of the others, with a different definition for the recurrence and the trigger.

 

No additional explanations are required as the rest of the code was described in the previous tutorial.

 

Program.cs

At the top of the code we see the DSS using directives list is shorter, because we are using fewer API calls.

Creating the instrument list

Like in the previous tutorials, we created an instrument list. The helper method that defines the instrument identifiers now contains a larger set of instruments, to increase the chances of running into an embargo. The permissioning of the account used to create this tutorial does not include real-time for the German exchange Xetra, so the last instrument (ALVG.DE) is embargoed. Below it is commented out to test the code without an embargo; un-commenting it will generate an embargo:

static IEnumerable<InstrumentIdentifier> CreateInstrumentIdentifiers()
{
    IEnumerable<InstrumentIdentifier> instrumentIdentifiers = new[] 
    {
        new InstrumentIdentifier
            { Identifier = "MX.TO", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "IBM.N", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "CSCO.O", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "0001.HK", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "6502.T", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "047810.KS", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "CARR.PA", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "UBSG.VX", IdentifierType = IdentifierType.Ric },
        new InstrumentIdentifier
            { Identifier = "VOD.L", IdentifierType = IdentifierType.Ric },
        //new InstrumentIdentifier
        //    { Identifier = "ALVG.DE", IdentifierType = IdentifierType.Ric }
    };
    return instrumentIdentifiers;
}

 

Creating the report template

The helper method that defines the requested field names now contains a larger set of fields, including descriptive and real time data fields. Note that an embargo will occur based on permissions, not on the field list:

static string[] CreateRequestedFieldNames()
{
    string[] requestedFieldNames = {
        "Instrument ID", "Security Description", "ISIN", "Asset ID",
        "Ask Price", "Ask Size", "Ask Yield", "Asset Swap Spread",
        "Asset Category", "Asset Category Description",
        "Asset Status", "Asset Status Description",
        "Asset Type", "Asset Type Description", "Asset SubType", "Asset SubType Description" };
    return requestedFieldNames;
}

We call our new DSS client helper method to create the report template:

string reportTemplateId = dssClient.CreateIntradayPricingReportTemplate(
    "myEmbargoIntradayPricingTemplateName",
    ReportOutputFormat.CommaSeparatedValues,
    requestedFieldNames);

 

Creating the schedule

We call our new DSS client helper method to create the schedule. As in the previous tutorial, for testing purposes it is set to run 2 minutes in the future, so that we quickly get results every time we run the tutorial:

int waitMinutes = 2;
DateTimeOffset dateTimeNow = DateTimeOffset.Now;
DateTimeOffset dateTimeSchedule = dateTimeNow.AddMinutes(waitMinutes);
int scheduleHour = dateTimeSchedule.Hour;
int scheduleMinute = dateTimeSchedule.Minute;

string timedScheduleId = dssClient.CreateTimedSchedule(
    "myTimedSchedule",
    instrumentListId,
    reportTemplateId,
    dateTimeNow, scheduleHour, scheduleMinute,
    "myTimedExtractionOutput.csv");

Note: instead of a timed schedule in the very near future, we could also have defined an immediate schedule; the end result would in essence be the same.

 

Waiting for the extraction to complete

The extraction will be launched by the schedule.

The next step is to wait for the extraction to complete.

In the previous tutorial we used the blocking API call WaitForNextExtraction. In the present case we want to handle embargoes, which might result in partial deliveries scattered in time, and we want the possibility to handle them as they come. For that reason we proceed differently and do not use blocking calls.

We will first wait for the DSS server to initiate the schedule’s extraction, by periodically retrieving the extraction report, which is done by referring to its schedule Id. We check for the first or default extraction (remember a schedule could be for a single extraction, or recurring):

ReportExtraction extraction = null;
while (extraction == null)
{
    extraction =
        extractionsContext.ReportExtractionOperations.GetByScheduleId(timedScheduleId).FirstOrDefault();
    if (extraction == null)
    {
        Console.WriteLine("{0:T} Waiting for server to initiate scheduled extraction ... ", DateTime.Now);
        Thread.Sleep(60 * 1000);  //Wait 1 minute
    }
}

 

We then display the extraction status and detailed status:

string extractionReportId = extraction.ReportExtractionId;

Console.WriteLine("DSS server has initiated the scheduled extraction.\n");

Console.WriteLine("{0:T} Initial extraction status:", DateTime.Now);
Console.WriteLine("Report id: " + extractionReportId +
    "\nStatus: " + extraction.Status +
    " - Detailed status: " + extraction.DetailedStatus);

 

The extraction will first be queued; its status will be Pending. Once triggered, it will be processed. During processing it might be temporarily embargoed, for the duration of the embargo delay. Once all the data is extracted, the extraction will be complete.

The following chart illustrates the extraction events as they progress in time from left to right, with the related values of the extraction Status and DetailedStatus:
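In code terms, the life cycle can be sketched as follows. This is a plain C# illustration of the sequence described above, not the SDK's actual types; the real API exposes this information through the extraction's Status and DetailedStatus properties, whose exact enum values may differ.

```csharp
using System.Collections.Generic;

// Illustrative model of the extraction life cycle, not the SDK's enums:
// an extraction is queued (Pending), then processed, possibly held back
// (Embargoed), and finally Completed.
public enum ExtractionPhase { Pending, Processing, Embargoed, Completed }

public static class ExtractionLifeCycle
{
    // Returns the phases in the order they occur; the Embargoed phase
    // only appears when some of the requested data is under embargo.
    public static IReadOnlyList<ExtractionPhase> Phases(bool embargoApplies) =>
        embargoApplies
            ? new[] { ExtractionPhase.Pending, ExtractionPhase.Processing,
                      ExtractionPhase.Embargoed, ExtractionPhase.Completed }
            : new[] { ExtractionPhase.Pending, ExtractionPhase.Processing,
                      ExtractionPhase.Completed };
}
```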

 

As mentioned in the previous tutorial, the extraction will deliver at least 3 files:

  1. The RIC maintenance note, with information on instrument identifier name changes, etc. This file could be empty.
  2. The extracted data itself, which could be delivered in one or several files, depending on the occurrence of an embargo and the preference settings. See DSS embargoes handling and preference settings.
  3. The extraction notes, containing details of the extraction, including information on applied embargoes.

 

Files are not all created simultaneously, so we must poll to check for newly created files. Every time we retrieve the file list, the entire list is returned, so we must remember which ones were already there to detect the new ones. For this we shall store the retrieved files, and their names (which will serve to filter between files we already stored and newly arrived ones, because it is much easier to compare the file names than the files themselves):

//Create a list to store the files that were retrieved:
List<ExtractedFile> retrievedFiles = new List<ExtractedFile>();
//Create a hash set to store the names of the files that were retrieved:
HashSet<string> retrievedFileNames = new HashSet<string>();

 

Our code resides in a loop. We poll for the file list by loading an extraction context property called Files. The toolkit only populates the extraction properties when you explicitly call LoadProperty(). These properties are cleared when retrieving an updated extraction report to check its status (which we do at the end of the loop). That is why we load the properties at each iteration of the loop, to populate the files in extraction.Files. For debugging we also display the number of files:

while (true)
{
    extractionsContext.LoadProperty(extraction, "Files");
    Console.WriteLine("Retrieved the updated file list. Extraction files count: " +
        extraction.Files.Count);

We use our hash set retrievedFileNames that contains the retrieved file names, and add to newFiles only the files whose name is not contained in retrievedFileNames. The aim is to ensure we don’t treat the same file twice:

    IEnumerable<ExtractedFile> newFiles =
        extraction.Files.Where(f => !retrievedFileNames.Contains(f.ExtractedFileName));

We add the new files to our list of retrieved files, and add their names to the hash set (to avoid picking them up a second time). If we wanted to treat files as they arrived, the commented place holder is where we could do it:

    foreach (ExtractedFile newFile in newFiles)
    {
        retrievedFiles.Add(newFile);
        retrievedFileNames.Add(newFile.ExtractedFileName);
        Console.WriteLine("New file {0} name: {1}", retrievedFiles.Count, newFile.ExtractedFileName);
        //If we want to treat new files immediately, we can do it here.
    }

We will exit the loop once the extraction status is Completed, which implies that all files have been created:

    if (extraction.Status == ReportExtractionStatus.Completed)
        break;

If the extraction status is not yet Completed, we wait for a minute or so (there is no sense in loading the network and DSS server with high frequency requests), before retrieving an updated extraction report to check the status again:

    Thread.Sleep(60 * 1000);  //Wait 1 minute

    //Retrieve this extraction's updated report (by referring to its Id):
    extraction = extractionsContext.ReportExtractionOperations.Get(extractionReportId);
    Console.WriteLine("\n{0:T} Retrieved the updated extraction report:", DateTime.Now);
    Console.WriteLine("Status: " + extraction.Status +
    " - Detailed status: " + extraction.DetailedStatus);

}  //End of while loop

As stated previously, the extraction properties are emptied by this last API call in the loop, which retrieves the extraction's updated report. That is why LoadProperty() is called at the start of each loop iteration, to refresh the file list.

 

When there is no embargo, things are quite quick as we are not waiting for an embargo delay (and there are few instruments, so the extraction is not very lengthy). The result looks like this:

Note the total number of files is 3, the 2 notes files and 1 data file. There is only one data file because there is no embargo, so partial deliveries are not required. The data file name could vary compared to this example; for an explanation refer to section Data file names further in this tutorial.

 

In case of embargo, the result is different as we must wait for the embargo delay:

The first 3 files are generated quite quickly. This time the data file is a partial one, including only the non-embargoed data. Using the DSS preference settings (see DSS embargoes handling and preference settings) you can choose whether you want to receive partial files, or only one full file at the end of the embargo delay. In this case we opted for partial files, which is why we received a (partial) data file. The data file name could vary compared to this example; for an explanation refer to section Data file names further in this tutorial.

These messages continue every minute, until the embargo has been lifted at the end of the embargo delay (15 minutes in this case), allowing the extraction to complete:

Note that now there are 4 files instead of 3, because we have a second partial data file, due to the embargo. The data file name could vary compared to this example; for an explanation refer to section Data file names further in this tutorial.

 

Retrieving the extracted data

The number of files generated depends on the DSS preferences settings. Usually, 3 files are generated, but if there are partial deliveries there will be more. To display the files we use the helper method we created in the previous tutorial:

int fileNumber = 0;
foreach (ExtractedFile retrievedFile in retrievedFiles)
{
    fileNumber++;
    DisplayFileDetailsAndContents(extractionsContext, retrievedFile, fileNumber);
}

 

RIC maintenance note

This file is only generated if it is enabled in the DSS general preferences settings, as explained above under section: DSS embargoes handling, partial deliveries and preference settings.

It contains information such as instrument identifier name changes, which could have occurred during the time period covered by the request. Instrument identifiers can change name, for instance if a company changes its name, is bought or merges, or simply for normalization reasons. This is not a frequent occurrence. The probability of an instrument identifier name change is obviously higher if the request is for a time series over a long time period.

Our current request is only for intraday data, for which there are no such events, so the file is empty:

 

Initially extracted data

The second file here is the first extracted data file. Depending on preference settings and a possible embargo, we could have one or several files.

In this example we have an embargo on ALVG.DE, and our DSS preferences are set for “Partial” and “Delta” files:

“Delta” files only contain the data that is not delivered in another file, so here it only contains the non-embargoed data:

Note: the data file type is “Partial”, because partial files are enabled in the DSS preferences. This does not imply that there will be more files; there could be one single file (if there is no embargo) or several files (in case of embargo). If partial files are enabled in the DSS preferences, the data file type is always “Partial”.

The data type would be “Full” if partial files were disabled, in which case there would only be one single data file.

 

What if the DSS preferences were not set for “Delta” files?

In that case this first data file contains all instruments of the request; the embargoed ones (ALVG.DE) are in the list but do not contain any data:

 

Extraction notes

The file contains extraction details, including information on applied embargoes. If there was no embargo this is what we would see:

 

In the case of an embargo (here it is 15 minutes, on ALVG.DE) the output would be slightly different:

 

Subsequently extracted data

In the case of an embargo and partial deliveries we get another file. Our DSS preferences are set for delta files:

“Delta” files only contain the data that is not delivered in another file, so here it only contains the data that was previously embargoed:

The data file name could vary compared to this example; for an explanation refer to section Data file names further in this tutorial.

 

What if the DSS preferences were not set for delta files?

In that case:

This last data file would contain all the data for all instruments, i.e. the same content as the previous data file but this time with the embargoed data added:

 

Cleaning up

Like in the previous tutorial, the instrument list, report template and schedule are deleted.

 

Full code

The full code can be displayed by opening the appropriate solution file in Microsoft Visual Studio.

 

Summary

List of the main steps in the code:

  1. Authenticate by creating an extraction context.
  2. Create an array of financial instrument identifiers.
  3. Create and populate an instrument list, by appending the array of financial instrument identifiers to it.
  4. Create the report template, using a defined template.
  5. Create an extraction schedule.
  6. Wait for the extraction to complete by checking the extraction report, in a polling loop:
     • Check the file list, retrieve new files.
     • Check the extraction status, exit if it is Completed.
     • Wait.
     • Retrieve an updated extraction report.
  7. Retrieve the extracted data.
  8. Cleanup.

 

Code run results

Build and run

Don’t forget to reference the DSS SDK, and to set your user account in Program.cs!

 

Successful run

Depending on the occurrence or not of an embargo, and preference settings for partial files, results will be slightly different. The following 4 sections show different outcomes, with / without embargo, with / without partial files. The data file name(s) vary between these 4 cases; they are summarized in section Data file names, further in this tutorial.

Variants with / without delta setting are not included here, the only difference being the contents of the last partial file, which was illustrated above at the end of section: Retrieving the extracted data.

Intermediary results are discussed at length in the code explanations in the previous section of this tutorial.

 

Case 1: no embargo, settings for no partial files

After running the program, and pressing the Enter key when prompted, 3 files will be generated, and the final result should look like this:



Notes:

  • The data file name does not contain any delay indication, because partial files are disabled.
  • The data file type is “Full”, because partial files are disabled, so there will only be one single data file.





 

Case 2: no embargo, settings for partial files

After running the program, and pressing the Enter key when prompted, 3 files will be generated, and the final result should look like this:



Note: the data file type is “Partial”, because partial files are enabled in the DSS preferences. This does not imply that there will be more files; there could be one single file (if there is no embargo) or several files (in case of embargo). If partial files are enabled in the DSS preferences, the data file type is always “Partial”.

 

Case 3: embargo, settings for no partial files

After running the program, and pressing the Enter key when prompted, 3 files will be generated, and the final result should look like this:

At the scheduled time, there is no data file. As we don’t want partial files, we must wait for the end of the embargo delay.







At the end of the embargo delay, a single data file is delivered, it contains all the data (embargoed and not embargoed).

 

Case 4: embargo, settings for partial files

After running the program, and pressing the Enter key when prompted, 4 files will be generated, and the final result should look like this:





Note: the data file type is “Partial”, because partial files are enabled in the DSS preferences. This does not imply that there will be more files; there could be one single file (if there is no embargo) or several files (in case of embargo). If partial files are enabled in the DSS preferences, the data file type is always “Partial”.



 

Potential errors and solutions

If the user name and password were not set properly, an error will be returned. See Tutorial 1 for details.

If the program attempts to create an instrument list that has the same name as an instrument list already stored on the DSS server, an error will be generated. Similar errors will arise for a report template or an extraction schedule that has the same name as an existing one. See Tutorial 2 for details.

Depending on the permissioning of your DSS account, you will get different results. This is not an error, just the consequence of different data entitlements. To experiment with the different use cases using the sample code, check your permissioning and try changing the list of instruments so that some are under embargo and others are not. This might not always be possible; in extreme cases all your instruments might be under embargo (if you have no real-time permissioning), or none at all (if you have full real-time permissioning, which is rare).

 

Data file names

Depending on the DSS general preference settings and the presence or absence of an embargo, the number and name(s) of the generated data file(s) will vary. Their common root can be set using the OutputFileName property when creating the schedule; in this tutorial it was set to myTimedExtractionOutput.csv. If it is not set, DSS generates a default name. The output of the 4 different run results described above is summarized in the following table:

| Embargoed data? | DSS set for partial deliveries? | File extracted at scheduled time | File extracted after embargo delay | File type |
|---|---|---|---|---|
| No  | No  | myTimedExtractionOutput.csv      | (no file)                         | Full    |
| No  | Yes | myTimedExtractionOutput.0min.csv | (no file)                         | Partial |
| Yes | No  | (no file)                        | myTimedExtractionOutput.csv       | Full    |
| Yes | Yes | myTimedExtractionOutput.0min.csv | myTimedExtractionOutput.15min.csv | Partial |


The name of the second file in the last line of the table reflects the embargo duration. If the embargo had lasted 30 minutes, the name would have been myTimedExtractionOutput.30min.csv.

In the case of embargoed data and partial deliveries, you could also encounter several embargoes with different durations. In that case there would be more files, as DSS generates one file per embargo duration.
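As an illustration of the naming convention above, the partial file name can be derived from the schedule's OutputFileName and the embargo delay in minutes. The following sketch uses a hypothetical helper (it is not part of the DSS SDK) that simply reproduces the pattern shown in the table:

```csharp
using System;
using System.IO;

static class PartialFileNaming
{
    //Hypothetical helper: derives a partial data file name from the
    //schedule's OutputFileName and an embargo delay in minutes,
    //following the naming pattern shown in the table above.
    public static string PartialFileName(string outputFileName, int embargoMinutes)
    {
        string root = Path.GetFileNameWithoutExtension(outputFileName);  //"myTimedExtractionOutput"
        string ext = Path.GetExtension(outputFileName);                  //".csv"
        return string.Format("{0}.{1}min{2}", root, embargoMinutes, ext);
    }
}
```

For example, PartialFileName("myTimedExtractionOutput.csv", 15) returns myTimedExtractionOutput.15min.csv, matching the last line of the table.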

 

Understanding the refactored version

Explanations

DSS client helper class file: DssClient.cs

Using statements were added for the new API calls used in the new method.

One new method was added, to wait for the DSS server to initiate a schedule’s extraction:

public ReportExtraction WaitForExtractionInitiation(
    ExtractionsContext extractionsContext, string scheduleId)
{
    ReportExtraction extraction = null;
    while (extraction == null)
    {
        extraction =
            extractionsContext.ReportExtractionOperations.GetByScheduleId(scheduleId).FirstOrDefault();
        if (extraction == null)
        {
            Console.WriteLine("{0:T} Waiting for server to initiate scheduled extraction ... ", DateTime.Now);
            Thread.Sleep(60 * 1000);  //Wait 1 minute
        }
    }
    return extraction;
}

 

Main program file: Program.cs

To wait for the DSS server to initiate the schedule’s extraction, we call our new DSS client helper method:

ReportExtraction extraction =
    dssClient.WaitForExtractionInitiation(extractionsContext, timedScheduleId);

 

To wait for the extraction to complete, and retrieve the files, we call a helper method:

retrievedFiles = RetrieveExtractionFiles(extractionsContext, extraction);

As this helper method is very specific to our workflow, we declare it in Program.cs instead of DssClient.cs, after the main code. It contains the polling loop that checks for the generated files and waits for the extraction to complete. If extracted files are to be processed as they are generated, without waiting for the end of the embargo, the call to do so can be inserted in the foreach loop. This is illustrated below by commented code, which can be un-commented to see it in action:

static List<ExtractedFile> RetrieveExtractionFiles(ExtractionsContext extractionsContext, ReportExtraction extraction)
{
    string extractionReportId = extraction.ReportExtractionId;

    //Create a list to store the files that were retrieved:
    List<ExtractedFile> retrievedFiles = new List<ExtractedFile>();
    //Create a hash set to store the names of the files that were retrieved:
    HashSet<string> retrievedFileNames = new HashSet<string>();

    while (true)
    {
        //(Re-)load the files list, some may have arrived since the last loop iteration.
        //Note:
        //- The toolkit only populates collections inside entities when you explicitly call
        //  LoadProperty(), so we do that now to populate the files in extraction.Files.
        extractionsContext.LoadProperty(extraction, "Files");
        Console.WriteLine("Retrieved the updated file list. Extraction files count: " +
            extraction.Files.Count);

        //Generate a list of new files, filtering out already retrieved ones:
        IEnumerable<ExtractedFile> newFiles =
            extraction.Files.Where(f => !retrievedFileNames.Contains(f.ExtractedFileName));
        /*
        //The following also works, without requiring retrievedFileNames:
            IEnumerable<ExtractedFile> newFiles =
                extraction.Files.Where(newfile =>
                    !retrievedFiles.Any(retfile =>
                        newfile.ExtractedFileName == retfile.ExtractedFileName));
        */

        //Add the new files to the list of retrieved files:
        foreach (ExtractedFile newFile in newFiles)
        {
            retrievedFiles.Add(newFile);
            retrievedFileNames.Add(newFile.ExtractedFileName);
            Console.WriteLine("New file {0} name: {1}", retrievedFiles.Count, newFile.ExtractedFileName);
            //If we want to treat new files immediately, we can do it here.
            //Uncomment the next line to see that in action:
            //DisplayFileDetailsAndContents(extractionsContext, newFile, retrievedFiles.Count);
        }

        //Exit loop if the extraction is completed:
        if (extraction.Status == ReportExtractionStatus.Completed)
            break;

        Thread.Sleep(60 * 1000);  //Wait 1 minute

        //Retrieve this extraction's updated report (by referring to its Id):
        extraction = extractionsContext.ReportExtractionOperations.Get(extractionReportId);
        Console.WriteLine("\n{0:T} Retrieved the updated extraction report:", DateTime.Now);
        Console.WriteLine("Status: " + extraction.Status +
            " - Detailed status: " + extraction.DetailedStatus);
        //Extraction properties are emptied when retrieving the extraction's updated
        //report, hence the need to call LoadProperty() to refresh the files list,
        //at the start of the loop.

    }  //End of while loop
    return retrievedFiles;
}

Note: the code contains a (commented) alternative version of the filter which generates a list of new files. This is just to illustrate that our goals can be achieved in different ways.
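For completeness, here is a minimal sketch of what a file handling method such as the commented-out DisplayFileDetailsAndContents could look like. It assumes the SDK's ExtractionsContext.GetReadStream() call (used in other tutorials of this series) to download the file contents; the property names and error handling are illustrative and should be adapted to your own workflow:

```csharp
//Hedged sketch, not part of the tutorial's delivered code: display an
//extracted file's details and dump its contents to the console.
static void DisplayFileDetailsAndContents(
    ExtractionsContext extractionsContext, ExtractedFile file, int fileNumber)
{
    Console.WriteLine("File {0} name: {1} (type: {2}, size: {3})",
        fileNumber, file.ExtractedFileName, file.FileType, file.Size);

    //Skip empty files (e.g. an empty RIC maintenance note):
    if (file.Size == 0) return;

    //Download the file contents as a stream and display them:
    using (DssStreamResponse response = extractionsContext.GetReadStream(file))
    using (StreamReader reader = new StreamReader(response.Stream))
    {
        Console.WriteLine(reader.ReadToEnd());
    }
}
```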

 

Full code

The full code can be displayed by opening the appropriate solution file in Microsoft Visual Studio.

 

Build and run

Don’t forget to reference the DSS SDK, and to set your user account in Program.cs!

 

Conclusions

This tutorial introduced the DSS features for embargo handling, and detailed the different possible outcomes depending on the DSS preference settings and any embargoes.

 

Now move on to the next tutorial, which uses simplified high level API calls, to:

  • Create an instrument list array, outside of the DSS server.
  • Create an array of data field names, outside of the DSS server.
  • Create and run an on demand End of Day data extraction on the DSS server, wait for it to complete.
  • Retrieve and display the data.

Cleanup will no longer be required.

 

Tutorial Group: 
.Net SDK Tutorials