
Data as the new oil

Data is a vast and precious commodity. It has even been likened to oil, in that it is central to everything, yet many organisations struggle to realise its full potential. So you have to ask yourself: why is that? The answer you often hear is that data is difficult. In actuality, it is not the data that is difficult but the processes organisations build around it that make it appear so. These processes are usually formalised under a fancy banner called data governance, but in reality this leaves organisations short-changed and missing out on opportunities. Our philosophy at PyCell is to make getting the most from your data easy by overcoming the challenges outlined below, and to give you a set of tooling that will make you wonder, “Why did my IT department think this was difficult?”


Data: The key challenges

Below are some of the key challenges we have dealt with first-hand over the last 15 years as technology professionals in financial services. All of these are interlinked, some more than others, and together they culminate in expensive tools, frustrated users and long delivery times.

  • Data as a service is poorly understood – Companies we have worked with exhibit the same pattern: they focus on getting the data into a database or warehouse, and build their infrastructure and technology around simply getting the data landed. Once ingested, it comes under the control of a small subset of personnel who see themselves as its guardians. Users and other in-house systems that want access are simply at their mercy, or have to build bespoke solutions for each access pattern, whether another automated process or ad-hoc access by users. This leads to long and expensive delivery times, potentially resulting in lost market opportunities.
  • Disparate data sources that are poorly managed – Many companies generate data from many sources, but it is held in different stores with many different implementations. Even where it is held together, it cannot be joined or enriched, because the focus has been on ingestion. This forces significant ETL work just to make the data usable.
  • Expensive implementation – We have worked at several financial institutions generating vast amounts of risk data, each in its own unique way and each claiming to be the ‘best on the street’. The cost of building, maintaining and extending these platforms is eye-watering, running into tens of millions, as people with ‘experience’ are needed to build and maintain them.
  • Bespoke requirements for users result in ‘cloned’ datasets – This is linked to the first point: because data as a service is poorly understood, the data is difficult to access. Further complexity is introduced when different areas of an organisation want different views on the same data, and they typically end up creating a clone to meet their specific needs. They may also apply their own overrides and mappings, leading to pollution, fragmentation and a loss of lineage to the original values. The result is more reconciliation tools and more operational processes, many of them manual and labour-intensive, which increases costs, makes regulatory requirements harder to meet and ultimately has a negative impact on the bottom line.
  • IT departments exhibit a ‘my data, my controls’ mentality – Like the previous point, this stems from a poor understanding of data as a service. IT departments become the ‘guardians’ of the data rather than providers of tools and services, and focus on controlling access so that they can manage the NFRs and SLAs. Had they instead focused on building the right services and the right metadata around what is held, its ownership and its lineage, the ‘control freak’ mentality would not be required, and users could take responsibility for managing and owning their own data.

3 Steps to great insights

So how does PyCell differ? (Read more about our offering here.) We use the latest data-processing libraries to let you bring your disparate data sets together quickly and efficiently, through a three-stage process: Load, Analyse, Decide.

Load – We provide connectors and ingestion tools for multiple file formats and server-to-server transfers. The key point is that we don’t manipulate the data: this stage of the process is all about bringing your data in, in its original form.
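As a rough illustration only (PyCell’s own connector API is not yet published), the sketch below uses the open-source pandas library to show the kind of untransformed, multi-format loading this stage performs. The file names are hypothetical placeholders for your own sources.

```python
import pandas as pd

# Load each source in its original form; no mappings or overrides are
# applied at this stage, so lineage back to the raw values is preserved.
trades = pd.read_csv("trades.csv", dtype=str)       # keep raw values as strings
positions = pd.read_parquet("positions.parquet")    # columnar store extract
risk = pd.read_json("risk_feed.json", lines=True)   # line-delimited JSON feed

# File names above are hypothetical; swap in your own sources.
print(trades.shape, positions.shape, risk.shape)
```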

Analyse – This is where the real magic happens. Our intelligent tooling lets you categorise and organise your data. Features such as SmartColumns do the grunt work of filtering, tagging and running complex functions, giving you answers quickly, consistently and in a repeatable manner. Combined with beautiful charting, this makes PyCell the only tool your business will need.
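SmartColumns’ exact interface is likewise not yet public, but conceptually they behave like reusable derived columns: a rule is defined once and then reapplied consistently wherever the data is used. Here is a minimal pandas sketch of that pattern, with hypothetical column names such as notional and exposure_tag:

```python
import pandas as pd

def tag_large_exposures(df: pd.DataFrame, threshold: float = 1_000_000) -> pd.DataFrame:
    """Hypothetical 'smart column': define the tagging rule once, reuse everywhere."""
    out = df.copy()
    out["exposure_tag"] = out["notional"].apply(
        lambda v: "large" if v >= threshold else "normal"
    )
    return out

trades = pd.DataFrame({"trade_id": [1, 2, 3],
                       "notional": [250_000, 5_000_000, 900_000]})
tagged = tag_large_exposures(trades)

# Filtering is now consistent and repeatable across users and reports.
large = tagged[tagged["exposure_tag"] == "large"]
print(large)
```

Because the rule lives in one place rather than being copied into each user’s spreadsheet, every team filters and tags the data the same way, which is exactly the clone-and-drift problem described above.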

Decide – You can make decisions directly from our vast array of analysis tools, or use the power of our Programs toolkit to write and execute bespoke routines that run your own analysis methods and build intelligent decision-making capabilities.
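Purely as a sketch (the real Programs toolkit may look quite different), a bespoke decision routine could be as simple as a Python function that consumes the tagged output of the Analyse stage and emits an action:

```python
import pandas as pd

def breach_decision(tagged: pd.DataFrame, limit: int = 2) -> str:
    """Hypothetical bespoke routine: turn analysed data into an action."""
    n_large = int((tagged["exposure_tag"] == "large").sum())
    if n_large > limit:
        return f"ESCALATE: {n_large} large exposures exceed the limit of {limit}"
    return "OK: within limits"

# 'exposure_tag' is the hypothetical tag produced in the Analyse stage.
tagged = pd.DataFrame({"exposure_tag": ["large", "normal", "large", "large"]})
print(breach_decision(tagged))  # -> ESCALATE: 3 large exposures exceed the limit of 2
```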

We do all of the above more efficiently and cost-effectively because our platform is built from the ground up with efficiency, scalability and flexibility at its core. We are still building this amazing tool, so please register to be the first to receive exciting updates about our launch and how our tools will change the way you use data.