Azure SQL DW 0-100 DWUs
Who? Me – Simon Whiteley, Principal Consultant at Adatis Consulting. Cloud architect, general dabbler in new tech. Adatis – a Business Intelligence & Data Analytics consultancy, leading the charge in the cloud revolution. Who uses Azure DataWarehouse? The talk itself is a mix of fundamentals and actual lessons and code from building a large ETL framework. If you’ve got any questions or issues, just get in touch.
Azure SQL DataWarehouse
WHAT? WHERE? HOW? WHY? So this is what we’re here to talk through: firstly, what Azure SQL DW actually is – how it works, and why it works the way that it does. That should put into context where to use it and what kind of solutions it’s appropriate for. Then we’ll dive into how to actually use it – some design patterns, and some lessons I wish I’d known going into my first build. Finally, if we have time, we’ll talk about why I like it.
WHAT? Let’s get started. What is it? Just a SQL Server, right?
Data Warehouse Massively Parallel Processing Clustered ColumnStores
SO – what’s Azure DataWarehouse, and why is this man so excited about it? Well, most people have heard of PDW – Parallel Data Warehouse – which was later rebranded as APS. Azure DataWarehouse is essentially APS in the cloud. It’s not identical, but it reuses the APS database engine and performs according to similar principles. It’s very much an MPP appliance (Massively Parallel Processing, for the non-geek). The main storage engine uses clustered columnstores, so it compresses data very well and it’s QUICK. But why is it quick, and how does this work? Well, to understand this, you need to be aware of distribution and partitioning. In a nutshell, you have a ton of small servers working in unison. You want to maximise the number of servers doing the work – that’s distribution. But you don’t want each server to be doing more work than it needs to, so you can use traditional partitioning to minimise the data each server touches. So when you run a query, to be fast, you want LOTS of distribution whilst ELIMINATING partitions. But I think we need a better example – let’s take a look at the on-prem APS approach.
PDW/APS Scaling 8 Readers 8 Writers 8 Distributions Control ¼ Rack
APS comes in quarter racks – you have a control node in each full rack, and the first ¼ rack provides 8 readers, writers and distributions. Essentially, your data is spread across 8 different SQL Servers, each of which has a thread for reading and writing data. Want it to go faster? You give your friendly Microsoft rep a call and a large bag of money, and you ramp up to…
PDW/APS Scaling 16 Readers 16 Writers 16 Distributions Control ¼ Rack
A half rack. You’ve literally doubled your processing capacity, and if your data is evenly spread across your distributions, you can effectively double performance. 16 readers and 16 writers, each interacting with one of 16 data distributions. If that’s not fast enough and there’s budget burning a hole in your pocket…
PDW/APS Scaling 32 Readers 32 Writers 32 Distributions Control ¼ Rack
You can keep scaling up and getting linear performance improvements. However, when we add a new rack, we literally have to take the data that’s spread across 16 nodes and spread it across 32. That’s a lot of data movement – it’s not a quick task. So you’re not scaling on the fly – you can’t have a guy down in the server room plugging racks in and out to cater for demand. It would be massively inefficient, and it’s not like you stop paying for those racks when they’re out! That’s APS on-prem. But we really want to know about Azure DW – how does that manage to scale? You can literally move a slider and get similar performance improvements within minutes, no matter your data size.
60 Distribution Nodes 600,000,000 Records
The answer is that distribution nodes have been divorced from the readers and writers. Azure DW has a fixed number of distributions – currently 60 in the preview of ADW – and each distribution is always allocated to one of the processing nodes. So how do we distribute data properly? Well, firstly there’s an approach called round robin – each record coming in is put on the next node in a sequence, forcing an equal distribution of data. This is the default approach and the most straightforward, but it’s not the most performant. The alternative is to pick a column to perform a hash distribution on. However, when specifying your own distribution method, there are some common pitfalls, as sketched below.
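To make that concrete, here’s what the two options look like in DDL – a minimal sketch, with hypothetical table and column names:

-- Round robin: the default; forces an even spread, but guarantees nothing
-- about which rows end up together
CREATE TABLE dbo.FactSales_RoundRobin
(
    SaleKey   BIGINT        NOT NULL,
    ClientKey INT           NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX);

-- Hash: every row with the same ClientKey lands on the same distribution
CREATE TABLE dbo.FactSales_Hashed
(
    SaleKey   BIGINT        NOT NULL,
    ClientKey INT           NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
)
WITH (DISTRIBUTION = HASH(ClientKey), CLUSTERED COLUMNSTORE INDEX);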
60 Distribution Nodes This is what we’re aiming for – 600,000,000 records split evenly across our distributions, so there are roughly 10,000,000 records on each node. But what if we hadn’t thought too much about what kind of queries would be run against the data? Let’s say we distributed based on client key. We’ve got way more than 60 clients (having fewer would mean there are nodes with no data on them!) and the data is evenly spread. Sounds good?
Distribute by client – select * where clientId = 321
If a common query to be run against the warehouse uses the ClientKey in the where clause, ie: it’s a common search predicate, we’ve essentially allocated all of our work to a single node. That node has all of the records associated with our client and so has to crunch through the numbers for the entire order history.
Distribute by other – select * where clientId = 321
If we had a different distribution hash, then each of our nodes would be able to do a little of the work. The total time of the query in this scenario would be that of the slowest node, all of which should be much faster than our single node working alone. This is a pretty simple example where we’re only talking about querying a single table – here we might as well have used a round robin distribution. However, when your query joins multiple tables, it gets interesting. Each individual node still needs to work in isolation, but if some of the records it needs to complete its query are on a different node, then we have to temporarily make a copy of the data. This is known as data movement, and we generally want to minimise it when running queries. But if we have distributed all tables in the query on the same distribution hash, then each node can run its query without referring to other nodes’ data, and we get the real performance benefits of MPP. OK, so that’s distribution in a nutshell. There’s a lot more to it, but hopefully that’s enough of a grounding to get you started. But it raises a question – if we always have 60 distributions, how do we scale ADW? Is it always the same?
SELECT DIM.NAME, COUNT(*) FROM FACT INNER JOIN DIM ON FACT.KEY = DIM.KEY GROUP BY DIM.NAME
Here’s a better example – I want the count of records per dimension member for a given fact. A pretty common request, but how does distribution affect this?
Distribution: Round Robin
[Diagram: fact records (keys A1–A3, B1–B3) and dimension members scattered across the distributions]
Well, first let’s take some sample records and look at how things work when we’re using the standard round robin approach. We’ve got some fact records – for simplicity I’ve included the dimension key on each fact record, so we can see which dimension member they’re hoping to join to. Let’s distribute that – as you can see, each record is placed in turn, making sure we have an even split across the distributions. There’s no guarantee, though, about which records end up together.
Query Execution
[Diagram: a single distribution holding a mix of fact records and dimension members]
Now let’s run the query. Remember, if we’re working in the most efficient way possible, each node will be able to execute its query in isolation. Let’s grab one of the distributions as our example. OK – so I want to join from my fact records to dimension members A1 and A2. Now, one of those happens to be on my distribution, but not all of them. And more importantly, because we haven’t explicitly distributed based on our dimension key, we don’t KNOW which dimension members are on which distribution. So what ADW does in this case is grab the dimension records from every distribution and broadcast them to every other distribution. Now we’re absolutely sure that each distribution has all the records it needs. This is fine when the tables are quite small, but if the dimension was a large one, there could be a massive overhead involved. This operation is known as data movement, and it’s basically the key to performance tuning within Azure DataWarehouse. The entire data model design is geared towards minimising unnecessary data movement.
Distribution: HASH Column
[Diagram: records reorganised so each dimension key’s fact rows and dimension member sit on the same distribution]
How do we avoid this? Well, we distribute our data based on a common column – preferably one that’s used in the join criteria. So let’s reorganise our data based on the dimension key. Right, that looks a little bit better. One thing to note – our data is now not evenly split across distributions. That’s a major thing to keep an eye on: those heavier buckets will likely perform worse than other nodes and, as we saw earlier, your queries are only as quick as your slowest distribution.
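A quick way to keep an eye on that skew (table name as per the hypothetical example earlier) is the PDW space-used report, which returns rows and space for each of the 60 distributions:

-- One result row per distribution; big differences in the row counts
-- mean some nodes are doing far more work than others
DBCC PDW_SHOWSPACEUSED('dbo.FactSales_Hashed');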
Query Execution
[Diagram: a single distribution where all the fact records and their dimension members are co-located]
Let’s look at a single distribution again. This time, all the data we need is already on the distribution, and because our data is specifically distributed on the dimension key, we know this is always going to be true, so we don’t need to do any data movement. The distributions can all execute in parallel and we’ll see a faster return of data. That, for now, is all I wanted to say about data distribution, but if you’re going to take anything away from this talk, it’s that data distribution is absolutely key to getting performance right on your SQL DataWarehouse.
SELECT DIM.NAME, COUNT(*) FROM FACT INNER JOIN DIM ON FACT.KEY = DIM.KEY GROUP BY DIM.NAME
[Diagram: the same query running independently on every distribution]
So hopefully that shows what’s going on across all distributions in unison. Lots of queries happening at once = quick, efficient SQLDW work!
Distribution Candidates
- More than 60 unique values (more than 600 preferable!)
- Even distribution of values
- Used in JOIN criteria
- Not used in WHERE criteria
Based on that knowledge, we’re after the following factors for a good distribution. We want more than 60 unique values – but more than this, we want LOTS more; at least 600 would be preferable. We want the values to be evenly spread: if one key value has 10 records but another has 1,000, that’s going to cause a lot of discrepancy in distribution performance. Next, the biggest reason for data movement is joins – look at the queries your users are going to issue commonly, as the keys used in those joins are candidates for distribution. Finally, if users commonly filter using your distribution key, this is going to drastically reduce parallelism. If it happens now and then, it’s not terrible – but if it affects the bulk of queries, it’s going to be a real performance drain.
100 DWUs – Control + 1 Compute node: 8 Readers, 60 Writers
This is what they’ve done. Azure DataWarehouse still has your single control node looking after everything. We’re also given this idea of DWUs – a “Data Warehousing Unit” is an abstract term Microsoft use to bundle CPU, memory, disk etc. into a convenient metric. You’ll see a similar idea with Azure SQL DB’s DTUs. What’s actually under the hood at 100 DWUs is a single compute node – a quarter rack, just without the distribution nodes. So that’s 8 readers that can access our 60 distributions. We always have 60 data writers, as they’re fixed to the distributions themselves. When we scale up…
200 DWUs – Control + 2 Compute nodes: 16 Readers, 60 Writers
We simply add another compute node, giving us another 8 readers. We can now access twice as many distributions in parallel. If we’re still struggling, we can lift it up again…
400 DWUs – Control + 4 Compute nodes: 32 Readers, 60 Writers
There we go – we’ve now got 32 readers against our distributions. This scaling takes only a minute or two and affects the whole server. The real key thing to note is that you’re charged on an hourly basis at the highest level used. So if you ramp it right up to 2000 DWUs, then back down to 100 DWUs five minutes later, you’ll still be charged for a full hour of 2000 DWUs. Scaling with Azure DataWarehouse is something you plan in advance, to cater for expected peaks and troughs in your usage patterns.
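For reference, the scale operation itself is a single statement run against the master database – a sketch, assuming a warehouse named MyWarehouse:

-- Scale to 400 DWUs (four compute nodes); completes in a minute or two
ALTER DATABASE MyWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW400');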
Data Loading
SSIS / ADF / BCP
Any “push” technologies currently have to go through the control node. Essentially you’re sending record inserts, and the control node needs to figure out what to do with them. SSIS is the slowest option here, with Azure Data Factory being slightly faster and BCP slightly better again. But they’re all throttled by the control node. What this means is there’s an upper limit to the performance of a single stream of data, and the number of compute nodes we currently have doesn’t matter in the slightest – we physically can’t make that one stream go faster.
SSIS
We can parallelise and gain some performance improvements. Running multiple SSIS packages can increase your data throughput, but only to a point – there’s still a physical upper limit because of the reliance on the control node. There is, however, one more data loading approach which does not have this problem.
PolyBase
PolyBase! You’ve probably heard a lot about it – there’s a big fanfare around it today and tomorrow, as it’s officially coming as part of SQL Server 2016 in a couple of weeks. It’s a Hadoop-based tool – essentially a special external data reader used to read flat files held in HDFS (the Hadoop Distributed File System, essentially a distributed array of cheap storage that can be queried in parallel). In practice, it provides something known as an external table – rather than inserting data directly, it allows you to query flat files from within the database itself; essentially you’re pulling the data into the engine. How does it scale? Well, each compute node not only has its 8 internal readers, it also has 8 external readers in the form of an HDFS bridge. Therefore, if you add more compute nodes, you add more external data readers. If you have a lot of data to bring into Azure DataWarehouse, add more compute nodes and it will scale. There are one or two minor restrictions here – if your flat file is zipped, it can only be read by a single thread; otherwise PolyBase can read a single large file in parallel, which is pretty cool. There’s also only support for certain file types – but essentially any kind of delimiter can be used, so it’s generally good!
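A minimal PolyBase sketch, assuming the flat files sit in an Azure blob container – all object names here are hypothetical, and secured storage would also need a database-scoped credential:

-- Where the flat files live
CREATE EXTERNAL DATA SOURCE AzureStorage
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://data@myaccount.blob.core.windows.net');

-- How they're delimited
CREATE EXTERNAL FILE FORMAT PipeDelimited
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

-- The external table: querying it reads the underlying files in parallel
-- (assumes an ext schema already exists: CREATE SCHEMA ext)
CREATE EXTERNAL TABLE ext.FactSales
(
    SaleKey   BIGINT,
    ClientKey INT,
    Amount    DECIMAL(18,2)
)
WITH (DATA_SOURCE = AzureStorage,
      LOCATION = '/sales/',
      FILE_FORMAT = PipeDelimited);

-- Land the data internally via CTAS, so every compute node reads at once
CREATE TABLE dbo.FactSales
WITH (DISTRIBUTION = HASH(ClientKey), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM ext.FactSales;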
WHERE? That’s it for the basics – that’s what Azure DataWarehouse is and roughly how it works. Based on that, you should be able to make some reasonable decisions about when the technology is appropriate (hint: large amounts of data and aggregate/analysis workloads). Azure SQLDW has a hard limit of 1,024 concurrent connections, of which there can be only 32 concurrent queries. So if you’re looking at systems with very high user volumes, this might not be the best option. There is also the issue of the 60 distributions – if your datasets are small, the overhead of data movement and of aggregating the results from each distribution may outweigh the benefits you would gain from the parallelism. If you’re seriously considering Azure SQLDW, you’ll only really see benefits upwards of 1TB of data. So consider it if you have:
- Large datasets
- Analytical/aggregation workloads
- A small userbase
HOW? Those are the core principles you need to bear in mind when thinking about performance. Now we’ll dive into some more specific tools at your disposal and the techniques I’d recommend to get the most out of the system.
ELT > ETL First things first! ELT not ETL.
What does this mean? Well, in the general sense, each part of a BI workload involves taking data out of a datastore, changing it and placing it into another store: Extract, Transform, Load. The important semantic difference here is which tool performs the transform. In traditional ETL tools, such as SSIS, the tool that performs the extract and load also performs the transformation. On-premises, this is generally the most efficient way of doing things, with SSIS performing in-memory pipeline transformations to avoid I/O bottlenecks. With MPP systems, however, it is generally more efficient to use the parallel power of the appliance itself. With Azure SQLDW specifically, we want to avoid pushing data at the box and thus going through the control node.
This rules out SSIS as a data movement tool – in fact, the most efficient way to do things is via SQL directly. Each individual data movement should have a stored procedure containing the transform logic; this ensures the work is performed by the SQLDW box itself.
CTAS – Create Table As Select – a foundation of the ELT design. This is a special type of statement which is minimally logged. Essentially, it’s the fastest way to get data into a table. So if you have a table there already and you want to wipe it and fill it with data, the fastest way to do so is to drop the table and recreate it using a CTAS. The table it creates infers its schema from the query, so you need to be explicit with ISNULLs, CASTs and anything needed to point the query in the right direction. If you’ve ever used “SELECT field, field2 INTO #TABLE FROM SourceTable” then you’ll have a pretty good idea of how it works – but if you ran the two side by side, CTAS would perform faster as, again, it is specifically treated differently. So it’s a little weird – between loads, your transient tables could actually be dropped, ready to be recreated by your stored procedures; my database project doesn’t have any table definitions in there for these intermediary tables. Alrighty – so we’re using CTAS in an ELT structure, we can just sit back and kick off the load now, right? Nope…
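As a sketch of the pattern (table and column names hypothetical) – drop and rebuild a transient staging table on each load:

IF OBJECT_ID('dbo.StageOrders') IS NOT NULL
    DROP TABLE dbo.StageOrders;

CREATE TABLE dbo.StageOrders
WITH (DISTRIBUTION = HASH(ClientKey), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT CAST(OrderId AS BIGINT) AS OrderId,   -- be explicit with CASTs...
       ISNULL(ClientKey, -1)   AS ClientKey, -- ...and ISNULLs: CTAS infers the schema
       OrderValue
FROM dbo.RawOrders;

Note the ISNULL does double duty here – it supplies a default and makes the inferred column NOT NULL.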
Resource Classes Resource classes are super important.
Essentially, the warehouse has a tiered approach to resource governing, and by default you’re going to be constrained. If you’re not paying attention to resource classes, it can be as big a performance drain as filtering on your distribution key. So how does it work…?
DWU     Max concurrent queries   Concurrency slots allocated
DW100   4                        4
DW200   8                        8
DW300   12                       12
DW400   16                       16
DW500   20                       20
DW600   24                       24
DW1000  32                       40
DW1200  32                       48
DW1500  32                       60
DW2000  32                       80
DW3000  32                       120
DW6000  32                       240
Firstly, you’ve got these two ideas – concurrent queries and concurrency slots. The queries speak for themselves: you can only be running so many things at once. The slots, however, are a little different. A concurrency slot is essentially a unit of power, of resource – a combination of memory and compute, essentially your ability to execute. You can see that at the larger sizes there are more concurrency slots than concurrent queries. You can still only ever run 32 queries at once (everything else goes into a queue), but you can allocate multiple slots to a given query. Essentially, you can give queries more power.
DWU     Max queries   Slots   smallrc   mediumrc   largerc   xlargerc
DW100   4             4       1         1          2         4
DW200   8             8       1         2          4         8
DW300   12            12      1         2          4         8
DW400   16            16      1         4          8         16
DW500   20            20      1         4          8         16
DW600   24            24      1         4          8         16
DW1000  32            40      1         8          16        32
DW1200  32            48      1         8          16        32
DW1500  32            60      1         8          16        32
DW2000  32            80      1         16         32        64
DW3000  32            120     1         16         32        64
DW6000  32            240     1         32         64        128
This works via the resource classes, which show how many slots each class of query consumes. Classes are allocated on a per-user basis, so be careful – if you mark someone as a larger class, ALL of their queries get treated with priority, and it’s very easy to take up the whole system’s resources without realising. There are a few exceptions – querying system tables and performing DDL, that kind of thing, is automatically treated as smallrc. But look at the xlargerc class – you can basically have two of them running at once and nothing else, even when your warehouse is scaled up to the max. To give you an idea of what this actually means, we have a breakdown of memory allocation per distribution…
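Assigning a class is just database role membership – a sketch with a hypothetical user name:

-- Promote the ETL user for the load window...
EXEC sp_addrolemember 'largerc', 'ETLUser';

-- ...and drop them back down once the load is finished
EXEC sp_droprolemember 'largerc', 'ETLUser';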
Memory allocation per distribution (MB):
DWU     smallrc   mediumrc   largerc   xlargerc
DW100   100       100        200       400
DW200   100       200        400       800
DW300   100       200        400       800
DW400   100       400        800       1,600
DW500   100       400        800       1,600
DW600   100       400        800       1,600
DW1000  100       800        1,600     3,200
DW1200  100       800        1,600     3,200
DW1500  100       800        1,600     3,200
DW2000  100       1,600      3,200     6,400
DW3000  100       1,600      3,200     6,400
DW6000  100       3,200      6,400     12,800
Small is… well… small. Even as we scale up, smallrc only ever gets 100MB per distribution. This is useful for managing concurrency, making sure users aren’t killing the system for each other, and by default this limit is placed on all users. However, if I want to run my ETL, it might take a fair while if I’ve only got 100MB per distribution, especially if I’m performing any kind of complex calculation. But it gets worse. It’s not just a slow load that might happen – the actual quality of the data is going to suffer. Why, you ask? Because we’re working with columnstore.
smallrc vs largerc: when inserting into a columnstore index, rows are placed into a delta store until a certain trigger limit is reached, after which it collects them all, compresses them and writes them out as a compressed rowgroup. Because of the types of compression gains made by columnar indexes, you tend to get better compression over larger datasets – things like repeated values and dictionary lookups are great here. But if we don’t have a lot of memory, the deltastore has to create columnstore rowgroups more often, and we end up with something like the smallrc picture. Whereas if we have a large class associated with our ETL user, we’d see something more like the largerc one. Now, that’s a terrible oversimplification, but hopefully it clarifies the point – make sure you run your data processing under a higher class of user and you’ll get performance gains, not just during the load itself, but in the resulting warehouse structure. OK. Phew. Our data is loaded, it’s efficiently compressed, it’s distributed properly. How’s performance going to be? Terrible. We’re still going…
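You can sanity-check the result after a load – a sketch, assuming the PDW mirror of the usual SQL Server rowgroup DMV is available on your instance:

-- Lots of compressed rowgroups well under ~1,000,000 rows suggests the
-- load ran under too small a resource class
SELECT state_desc,
       COUNT(*)        AS row_groups,
       AVG(total_rows) AS avg_rows_per_group
FROM sys.dm_pdw_nodes_db_column_store_row_group_physical_stats
GROUP BY state_desc;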
Statistics Now, statistics are easy to forget, mainly because most systems create them for you automatically. However, we’re not working with most systems – Azure SQL DW does not generate statistics automatically, nor does it update them automatically. Remember when we were looking at data movement? In that case, it would only have moved the dimension table if it knew that was the smaller table. If we don’t have statistics, there’s a chance it’ll decide to perform data movement on the fact table instead. Imagine moving 100,000,000 rows to each distribution just to satisfy a user’s one-off query! So you need to generate stats, and update them, probably each time you do a load.
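A sketch of the pattern, with hypothetical names:

-- Create single-column stats on the columns used in joins and filters...
CREATE STATISTICS stat_FactSales_ClientKey ON dbo.FactSales (ClientKey);

-- ...and refresh them at the end of each load, because SQLDW won't
UPDATE STATISTICS dbo.FactSales;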
@@ROWCOUNT doesn’t exist within SQLDW. It makes sense that this is harder than normal – we’re not dealing with a single row count but with 60 row counts, one from each distribution. This means we need to find another method of capturing row counts when we’re building an enterprise-grade BI system. This is done via a small query on the SQLDW system views, which I have wrapped as a stored procedure called at the end of all data movements (the procedure and variable names below are reconstructions, as the originals were lost in transcription):

ALTER PROC [dbo].[GetLastRowCount] @RowCount BIGINT OUT AS
BEGIN
    DECLARE @Count BIGINT = (
        SELECT SUM(sqlr.row_count) AS row_count
        FROM sys.dm_pdw_sql_requests sqlr
        WHERE sqlr.row_count <> -1
        AND sqlr.request_id IN (
            SELECT TOP 1 request_id
            FROM sys.dm_pdw_exec_requests exer
            WHERE exer.session_id = SESSION_ID()
            AND exer.status <> 'Running' -- exclude this statement
            ORDER BY exer.submit_time DESC, exer.request_id DESC));
    SET @RowCount = @Count;
END
Surrogate Keys Identity columns and sequences – again, these are hard to do. Technically, each distribution works in ignorance of the others, which means there is no way for them to determine the next value in a sequence, or the insertion order of a bulk load. We therefore have to code defensively against such things. A simple example (variable name reconstructed, as it was lost in transcription) would be:

DECLARE @MaxKey bigint = (SELECT MAX(DimKey) FROM dbo.MyDim);

INSERT INTO dbo.MyDim (ID, COLUMNS)
SELECT @MaxKey + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS ID,
       COLUMNS
FROM etc etc
Explain EXPLAIN is very much our friend. Another difference from the ways of the past – there is no such thing as the actual or estimated query plan view within Azure SQLDW. It obviously does use a plan, however, and it’s very useful to delve into it to explore what happened. To use it, simply type the keyword EXPLAIN in front of your query and run the statement including it. This will return the estimated query plan as an XML string. If you’re querying using Visual Studio, you can click on this to open a new window with a neatly formatted view of your query plan. SQL Server Management Studio, unfortunately, just returns a single-line string of the plan at the moment. The plan itself is outside the scope of what we’re talking about here – that’s a few-hour session in itself! But it’s very useful to be aware of the key details: BROADCAST MOVE and SHUFFLE MOVE represent data movement and are generally best avoided. Take a look at your plan – if you see these actions related to very large tables, there is a good chance the query is not taking full advantage of your distribution, and there may be ways to improve it. Also check the overall number of steps – you may be able to rewrite the query to optimise the execution. Aggregate functions, for example, need the full dataset to be passed up to the control node for processing (ie: we can’t work out the “average” on each distribution independently), but maybe we could perform some processing first to minimise the data required for that final step.
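For example, prefixing the aggregate query from earlier:

-- Returns the estimated plan as XML instead of executing the query;
-- search the output for BROADCAST_MOVE or SHUFFLE_MOVE operations
EXPLAIN
SELECT DIM.NAME, COUNT(*)
FROM FACT
INNER JOIN DIM ON FACT.KEY = DIM.KEY
GROUP BY DIM.NAME;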
Orchestration Orchestration is a tricky one – like Azure SQLDB, there is no out-of-the-box orchestration solution for Azure SQLDW. There are several options available, but none of them are perfect:
Azure Data Factory – for me, this is the least mature option. If you’re doing very simple operations, or just getting data into the warehouse for later processing, it’s a solid tool, but for more complex ETL tasks it starts to fall down. I need a little more control and more complex execution paths for the majority of my BI solutions.
Azure Automation – for the scaling up/down and pausing/resuming tasks, you can’t beat Automation runbooks. It’s essentially a PowerShell PaaS solution: you write PowerShell and have it run as a cloud service, with no overheads and no requirement for a server. It’s pretty straightforward to set up schedules which will automatically stop/start your Azure SQLDW (which you CAN do, unlike SQLDB!). Running data movements from Automation runbooks may be a little code-heavy though, so I stick with it just for infrastructure-level changes.
SSIS – the old favourite. For me, it is still the most powerful ETL management tool, despite the downside of having to create an Azure SQL VM for it to sit on. This sounds expensive and unwieldy, true, but using Automation you can have the box turned off for the majority of the day and simply turn it on when you need to run a job. As you can pay for SQL licences on a run-rate basis with Azure VMs, this is actually a remarkably cheap option. We can also run a relatively light box, as we have no need for data movement overheads – we’re simply using SSIS as a straight orchestration tool, kicking off stored procedures on SQLDW in a specific order and managing the process flow.
Monitoring Finally, monitoring.
There is very little out of the box – all the old favourites, such as Activity Monitor and the baked-in SQL Management Studio reports, are not supported with SQLDW (ie: they don’t exist). However, the DMVs behind them are all still there (although named a little differently in a few cases). This means we can easily put together a suite of SSRS reports that provide the same functionality, or even a PowerBI overview report. At Adatis, we have a suite of monitoring reports that we now use for this purpose. I’d suggest you spend some time at the start of your project getting these tools in place – it’ll make the whole development experience much smoother!
Please give us your feedback:
sqlrelay.co.uk/feedback So that’s it – apologies to the readers who couldn’t see the code demos! Thank you
Thanks for Listening Simon Whiteley @MrSiWhiteley
I can’t stress it enough – please get in touch with any ideas, comments or disagreements. Obviously, I’d be delighted to see if there’s any way Adatis can help you achieve your BI and warehousing goals as well! Thanks for Reading!