In this article, we will walk through the different layers of a well-built data lake and highlight common pitfalls and challenges to consider when implementing a data lake architecture.
A typical data lake decouples storage, processing, and analytics. Data is segmented into landing, raw, trusted, and curated zones depending on its consumption readiness. Each layer should be designed with cost optimisation, flexibility, and security in mind. Let’s take a closer look at what each layer does and which services are typically used.
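In practice, these zones are often modelled as prefixes within a single S3 bucket. The sketch below shows one such layout; the zone names come from the article, but the bucket structure, dataset partitioning, and function names are illustrative assumptions, not a prescribed standard.

```python
from datetime import date

# Zones from the article, modelled as top-level S3 prefixes (an assumption).
ZONES = ("landing", "raw", "trusted", "curated")

def zone_key(zone, dataset, day, filename):
    """Build an S3 object key like 'raw/orders/2024/05/01/part-000.parquet'.

    Partitioning by date is a common convention but not the only option.
    """
    if zone not in ZONES:
        raise ValueError(f"unknown zone: {zone}")
    return f"{zone}/{dataset}/{day:%Y/%m/%d}/{filename}"

print(zone_key("raw", "orders", date(2024, 5, 1), "part-000.parquet"))
# → raw/orders/2024/05/01/part-000.parquet
```

Keeping zones as prefixes (rather than separate buckets) makes it easy to attach zone-specific IAM policies and lifecycle rules to each prefix.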
AWS allows for a myriad of data producers, and ingestion methods can handle structured, semi-structured, and unstructured data loaded in batch or in real time (e.g., streaming with Kinesis Firehose). The files are stored in Amazon S3, which is an excellent fit as it has no storage limit. Data can arrive in any format and is stored as-is; the schema is applied only when the data is queried, an approach known as schema on read. S3 is also budget-friendly, as Intelligent-Tiering delivers automatic storage cost savings when access patterns change. Last but not least, it boasts excellent security features, from fine-grained access policies to robust encryption with KMS.
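A batch upload into the landing zone might look like the sketch below, which opts into Intelligent-Tiering and KMS encryption at write time. The bucket, key, and KMS key names are placeholders; the request is assembled in a separate helper so the boto3 call itself stays a one-liner.

```python
def put_object_args(bucket, key, kms_key_id=None):
    """Arguments for s3.put_object() that opt into Intelligent-Tiering
    and, optionally, server-side encryption with a KMS key."""
    args = {
        "Bucket": bucket,
        "Key": key,
        "StorageClass": "INTELLIGENT_TIERING",
    }
    if kms_key_id:
        args["ServerSideEncryption"] = "aws:kms"
        args["SSEKMSKeyId"] = kms_key_id
    return args

def upload_to_landing(body, bucket, key, kms_key_id=None):
    # Requires boto3 and configured AWS credentials.
    import boto3
    s3 = boto3.client("s3")
    s3.put_object(Body=body, **put_object_args(bucket, key, kms_key_id))
```

For high-volume streams, the same landing prefix is typically fed by Kinesis Firehose delivery rather than direct `put_object` calls.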
In a data lake or lake house architecture, the data processing layer is responsible for transforming data into a consumable state through validation, cleanup, normalisation, transformation, and enrichment. AWS Glue is used to track and catalogue the data. Purpose-built components perform a variety of transformations, big data processing (EMR), and near-real-time ETL. Amazon Athena can be used for quick probing into the data with SQL-style queries.
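Probing the lake with Athena boils down to submitting a SQL string against a Glue database and pointing the results at an S3 location. A minimal sketch, assuming a hypothetical `lake_db` database and results bucket:

```python
def athena_request(sql, database, results_bucket):
    """Build the parameters for athena.start_query_execution().

    The database and results bucket names are placeholders.
    """
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {
            "OutputLocation": f"s3://{results_bucket}/athena-results/"
        },
    }

def run_query(sql, database, results_bucket):
    # Requires boto3 and configured AWS credentials; Athena runs the
    # query asynchronously and returns an execution id to poll.
    import boto3
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        **athena_request(sql, database, results_bucket)
    )
    return resp["QueryExecutionId"]
```

Because Athena reads the Glue Data Catalog, any table the Glue crawlers have registered over the trusted or curated zones is immediately queryable this way.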
This layer provides data access and consumption primarily through S3. SageMaker allows machine learning models to be trained and deployed. Visualisations and dashboards can be built using tools like Amazon QuickSight or offloaded to Tableau (a SaaS offering). It’s essential to note that the quality of the analytics and visualisations depends on the quality of the data stored in the data lake.
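For programmatic consumers (a SageMaker training job, a notebook, an ad-hoc analysis), reading from the curated zone can be as simple as pointing pandas at the curated prefix. The layout below matches the zone convention used earlier in this article; the bucket and dataset names are placeholders.

```python
def curated_uri(bucket, dataset):
    """S3 URI for a curated dataset; the 'curated/<dataset>/' layout
    is an assumption, not an AWS requirement."""
    return f"s3://{bucket}/curated/{dataset}/"

def load_curated(bucket, dataset):
    # Requires pandas with s3fs and pyarrow installed, plus AWS credentials.
    import pandas as pd
    return pd.read_parquet(curated_uri(bucket, dataset))
```

BI tools such as QuickSight or Tableau would instead connect through Athena or a dedicated connector, but they ultimately read the same curated data.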
By understanding the different layers of a well-built data lake, businesses can realise the full potential of their data, enabling advanced analytics, generating insights, and improving decision-making processes.
As always, for any data platform needs, contact us at DataPhoenix — small end-to-end data solutions.