DataScope Select - REST API


.Net SDK Tutorial 9: On Demand: Composite extraction

Last update: February 2020
Environment: Windows
Language: C#
Compilers: Microsoft Visual Studio 2012/2013
Prerequisites: DSS login, internet access, having done the previous tutorials
Source code: Download .Net SDK Tutorials Code

Tutorial purpose

This is the ninth tutorial in a series of .Net SDK tutorials. It is assumed that the reader has acquired the knowledge delivered in the previous tutorials before following this one.

This tutorial is similar to Tutorial 8, but replaces the T&C (Terms and Conditions) extraction with a Composite extraction, which combines T&C and Pricing data.

In this example we retrieve a few composite fields related to annualized dividends and the balance sheet.

Again, we also cross reference instrument codes, i.e. retrieve several instrument codes (RIC, CUSIP, ISIN, SEDOL, etc.) for a list of instruments defined with different identifier types. For each input instrument code, only one RIC is returned (the primary RIC).

Like tutorials 6 and 7, this sample also contains 2 extraction requests, one with the ExtractWithNotes endpoint, the other with the ExtractRaw endpoint, to illustrate the differences between the two.


Table of contents


Getting ready

Opening the solution

The code installation was done in Tutorial 1.

Opening the solution is similar to what was done in the previous tutorials:

  • Navigate to the \DSS REST API\Tutorial 9\Learning folder.
  • Double click on the solution file rest_api_Composite.sln to open it in Microsoft Visual Studio.


Referencing the DSS SDK

Before anything else, you must reference the DSS REST API .Net SDK in the Microsoft Visual Studio project.

Important: this must be done for every single tutorial, for both the learning and refactored versions.

This was explained in Tutorial 2; please refer to it for instructions.


Viewing the C# code

In Microsoft Visual Studio, in the Solution Explorer, double click on Program.cs and on DssClient.cs to display both file contents. Each file will be displayed in a separate tab.


Setting the user account

Before running the code, you must replace YourUserId with your DSS user name, and YourPassword with your DSS password, in these two lines of Program.cs:

        private static string dssUserName = "YourUserId";
        private static string dssUserPassword = "YourPassword";

Important reminder: this must be done for every single tutorial, for both the learning and refactored versions.

Failure to do so will result in an error at run time (see Tutorial 1 for more details).


Understanding the code

We shall only describe what is new versus the previous tutorials.


DssClient.cs is the same as the Tutorial 8 version, except for the leading comment and two added methods that create and run a Composite extraction: one for JSON formatted data, the other for compressed CSV data.

These are both very similar to the other methods to create and run On Demand extractions, except that this time we declare a new CompositeExtractionRequest. Again, we use as parameters the instrument identifiers and requested field names arrays defined in Program.cs.

Here is the first method, for JSON data:

public ExtractionResult CreateAndRunCompositeExtraction(
    InstrumentIdentifier[] instrumentIdentifiers, string[] contentFieldNames)
{
    CompositeExtractionRequest extractionComposite = new CompositeExtractionRequest
    {
        IdentifierList = InstrumentIdentifierList.Create(instrumentIdentifiers),
        ContentFieldNames = contentFieldNames
    };
    //Run the extraction.
    //This call is blocking, it returns when the extraction is completed:
    return extractionsContext.ExtractWithNotes(extractionComposite);
}

Here is the second one, for compressed CSV data:

public RawExtractionResult CreateAndRunCompositeRawExtraction(
    InstrumentIdentifier[] instrumentIdentifiers, string[] contentFieldNames)
{
    CompositeExtractionRequest extractionComposite = new CompositeExtractionRequest
    {
        IdentifierList = InstrumentIdentifierList.Create(instrumentIdentifiers),
        ContentFieldNames = contentFieldNames
    };
    //Run the extraction.
    //This call is blocking, it returns when the extraction is completed:
    return extractionsContext.ExtractRaw(extractionComposite);
}

The Composite extraction request is just one of many possibilities. The Example Application explained in the Quick Start illustrates many more On Demand extractions.



Creating the instrument array

We create an array of 4 instrument identifiers and populate it manually, using the same helper method as in Tutorial 8:

InstrumentIdentifier[] instrumentIdentifiers = PopulateInstrumentIdentifiers();

The method is very simple, as the purpose of this tutorial is to concentrate on the extraction itself. For a more productized example, with details on handling large instrument lists read from file, see Tutorial 7.

For the demo we use 4 instruments, each one defined using a different identifier type: 1 Cusip, 1 Ric, 1 Isin and 1 Sedol.

We will use this array when we define the extraction.
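As an illustration, such a helper could look like the following sketch. It builds each entry with the SDK's InstrumentIdentifier type and IdentifierType enumeration; the identifier values shown here are examples, not necessarily those used in the downloadable code:

```csharp
static InstrumentIdentifier[] PopulateInstrumentIdentifiers()
{
    //Define an array of 4 instruments, each using a different identifier type.
    //The identifier values below are illustrative examples:
    InstrumentIdentifier[] instrumentIdentifiers =
    {
        new InstrumentIdentifier { IdentifierType = IdentifierType.Cusip, Identifier = "438516106" },
        new InstrumentIdentifier { IdentifierType = IdentifierType.Ric,   Identifier = "IBM.N" },
        new InstrumentIdentifier { IdentifierType = IdentifierType.Isin,  Identifier = "US4592001014" },
        new InstrumentIdentifier { IdentifierType = IdentifierType.Sedol, Identifier = "2005973" }
    };
    return instrumentIdentifiers;
}
```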


Creating the field name array

To create the field name array we proceed just like in the previous tutorials, calling the helper method we created in Tutorial 2:

string[] requestedFieldNames = CreateRequestedFieldNames();

For this demo the helper method defines a list of fields including five instrument codes (the use case is cross referencing), the currency code, and four pricing fields (the latter are not available in a T&C extraction):

static string[] CreateRequestedFieldNames()
{
    string[] requestedFieldNames = { "RIC", "CUSIP", "ISIN", "SEDOL", "Issuer OrgID",
                                     "Currency Code",
                                     "Annualized Dividend Period Start Date",
                                     "Annualized Dividend Adjusted Gross Amount",
                                     "Balance Sheet - Enterprise Value",
                                     "Balance Sheet - Market Value" };
    return requestedFieldNames;
}

As a reminder, an explanation of how to choose field names is available in Tutorial 2, at the end of the section on report templates, under the heading: How to choose a report template format, and find out what field names are available?

We will use this array when we define the extraction.


No report template or schedule creation

On Demand extractions do not require these, as explained in Tutorial 5.


Creating and running an On Demand extraction (for JSON data)

This is what we did in the previous tutorials. The extraction uses the ExtractWithNotes endpoint, which delivers the data in JSON format. We call a new DSS client helper method (the only difference with the methods used in preceding tutorials is the type of data we request):

ExtractionResult extractionResult =
    dssClient.CreateAndRunCompositeExtraction(instrumentIdentifiers, requestedFieldNames);
DssCollection<ExtractionRow> extractionDataRows = extractionResult.Contents;

It is a blocking API call which only returns when the extraction is complete, so there is no need for code to check for extraction completion.


Retrieving the extracted data and notes, writing them to file

The On Demand extraction returns extraction data rows. We process them using the helper methods from Tutorial 6:

DisplayAndLogExtractedDataFieldNames(dataOutputFile, extractionDataRows);
DisplayAndLogExtractedDataFieldValues(dataOutputFile, extractionDataRows);
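In essence, these helpers iterate over the returned rows and print each field as a name / value pair. A minimal sketch, assuming (as in the Tutorial 6 code) that the field values are exposed through each row's DynamicProperties collection:

```csharp
static void DisplayExtractedData(DssCollection<ExtractionRow> extractionDataRows)
{
    //Each row holds the extracted field values for one instrument:
    foreach (ExtractionRow row in extractionDataRows)
    {
        //Field names and values are exposed as name / value pairs:
        foreach (var field in row.DynamicProperties)
        {
            Console.WriteLine(field.Key + ": " + field.Value);
        }
        Console.WriteLine();
    }
}
```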

We also process the extraction notes:

DisplayAndLogAndAnalyzeExtractionNotes(notesOutputFile, errorOutputFile, extractionResult);


Creating and running an On Demand extraction (for compressed CSV data)

This is a different possibility, which we illustrate in this tutorial as we also did in Tutorials 6 and 7. Instead of retrieving JSON formatted data we request compressed CSV data, which is practical for storage and optimises extractions of large data sets.

Instead of the ExtractWithNotes endpoint (which delivers the data in JSON format), the extraction uses the ExtractRaw endpoint, which delivers compressed CSV formatted data.

Depending on your use case, you can choose either ExtractWithNotes or ExtractRaw.

We call a new helper method, which is very similar to the one we used above:

RawExtractionResult rawExtractionResult =
    dssClient.CreateAndRunCompositeRawExtraction(instrumentIdentifiers, requestedFieldNames);

The difference between this helper method and the previous one is the endpoint, and the fact that it returns a raw extraction result. This also implies that we do not extract data rows from the extraction result contents.


Retrieving the compressed data and notes, writing them to file

We process them using the helper methods from Tutorial 6:

dssClient.SaveCompressedData(rawExtractionResult, gzipDataOutputFile);
DisplayAndLogAndAnalyzeRawExtractionNotes(gzipNotesOutputFile, errorOutputFile, rawExtractionResult);
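For reference, the SaveCompressedData helper (introduced in Tutorial 6) essentially retrieves a read stream for the raw extraction result and copies it to disk as-is, without decompressing it. A minimal sketch, assuming the SDK's GetReadStream call as used in the Tutorial 6 code:

```csharp
public void SaveCompressedData(RawExtractionResult rawExtractionResult, string gzipFileName)
{
    //Retrieve a read stream for the compressed extraction output:
    DssStreamResponse streamResponse = extractionsContext.GetReadStream(rawExtractionResult);
    //Copy the stream to disk as-is, i.e. still gzip compressed:
    using (FileStream fileStream = File.Create(gzipFileName))
    {
        streamResponse.Stream.CopyTo(fileStream);
    }
}
```

Keeping the data compressed on disk avoids an unnecessary decompress / recompress cycle; the file can be decompressed later with any gzip-capable tool when the CSV content is needed.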


No cleaning up

As stated previously, cleanup on the DSS server is not required, as there is nothing to delete in this case.


Full code

The full code can be displayed by opening the appropriate solution file in Microsoft Visual Studio.



List of the main steps in the code:

  1. Authenticate by creating an extraction context.
  2. Create an array of financial instrument identifiers.
  3. Create an array of field names.
  4. Create an extraction.
  5. Run the extraction and wait for it to complete. Retrieve the extracted data and extraction notes, display them, and write them to files.
  6. Variant of step 5, for compressed data.

We do not create a report template or schedule, and there is no need to clean up.
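Putting the steps together, the overall flow of Program.cs looks roughly like the following sketch. The ConnectToServer helper name is the one used in the previous tutorials, and error handling as well as the compressed data variant (step 6) are omitted here:

```csharp
static void Main(string[] args)
{
    //Step 1: authenticate by creating an extraction context:
    DssClient dssClient = new DssClient();
    dssClient.ConnectToServer(dssUserName, dssUserPassword);

    //Steps 2 and 3: create the instrument identifier and field name arrays:
    InstrumentIdentifier[] instrumentIdentifiers = PopulateInstrumentIdentifiers();
    string[] requestedFieldNames = CreateRequestedFieldNames();

    //Steps 4 and 5: create and run the On Demand extraction (JSON data),
    //then retrieve the extracted data rows:
    ExtractionResult extractionResult =
        dssClient.CreateAndRunCompositeExtraction(instrumentIdentifiers, requestedFieldNames);
    DssCollection<ExtractionRow> extractionDataRows = extractionResult.Contents;

    //Display and save the extracted data and notes, using the Tutorial 6 helpers:
    DisplayAndLogExtractedDataFieldNames(dataOutputFile, extractionDataRows);
    DisplayAndLogExtractedDataFieldValues(dataOutputFile, extractionDataRows);
    DisplayAndLogAndAnalyzeExtractionNotes(notesOutputFile, errorOutputFile, extractionResult);
}
```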


Refactored version

There is none for this tutorial.


Code run results

Build and run

Don’t forget to reference the DSS SDK, and to set your user account in Program.cs!


Successful run

After running the program, and pressing the Enter key when prompted, the final result should look like this:


Potential errors and solutions

If the user name and password were not set properly, an error will be returned. See Tutorial 1 for details.



This tutorial continued with the simplified high level calls for On Demand extraction requests, illustrating a simple use case of instrument cross referencing with simultaneous price retrieval.

Like Tutorials 6 and 7, it also compared the use of two endpoints: one that delivers JSON formatted data for easy direct treatment, the other that delivers compressed CSV data for efficient download and storage of large data sets.

Now move on to the next tutorial, which covers a different topic.


Tutorial Group: 
.Net SDK Tutorials