Welcome to Two Rivers Consulting!
We are a consulting firm dedicated to data and to rapid return on investment through the judicious use of technology, coupled with an agile process and methodology. We believe in finding disruptive technologies and processes that produce breakthrough results in how your company gets insights from its data. After all, data fuels your business, and if you are not making data-driven decisions you most likely are, or soon will be, at a serious disadvantage to competitors who do.
If done properly, moving at least your elastic workloads to the cloud can save money! If you are a startup and need capacity that can grow as your company grows, the cloud may be where you want to start. In other words: Cloud First.
The Corporate Data Warehouse is a fundamental mainstay of corporate IT strategy, and it can get stale. The hardware and software you initially implemented may now be ready for upgrading or retirement.
Good data architecture is important from multiple perspectives. First, a picture is worth a thousand words, and a logical model of the things we use in our business, and their relationships to each other, is critical for communicating with our colleagues, customers, and vendors.
Big Data/Data Lake
Today more and more data is available to us. If we have a business strategy that incorporates big data we need an infrastructure that lets us use that big data for competitive advantage. We do the data engineering so your data scientists can focus on the business problem and not wrangling the data.
We have put this list together a couple of times over the years. This one is adapted from a list we did for a client several years ago. That particular client had a lot of .NET programmers who had been pressed into double duty as database programmers, but we have seen these types of issues in many different environments.
This was written primarily from an Oracle perspective, but many of the concepts hold for Microsoft SQL Server and most other databases I have worked with.
My plan is to edit this to include other items observed since I put this list together.
1. Unnecessary use of DISTINCT – Forces Oracle to sort and de-duplicate, which is expensive on large record sets. If you know your data and have unique indexes, don’t use DISTINCT.
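A hedged sketch of item 1 (table and column names are hypothetical):

```sql
-- Hypothetical example: customer_id is the primary key, so every row is
-- already unique and DISTINCT only forces an unnecessary sort.
SELECT DISTINCT customer_id, customer_name
  FROM customers;

-- Same result without the extra work:
SELECT customer_id, customer_name
  FROM customers;
```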
2. UNION instead of UNION ALL – Very similar to the DISTINCT issue: UNION forces Oracle to sort and de-duplicate. If your result set is already unique, don’t make Oracle do the extra work.
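To illustrate item 2 with hypothetical names: when the two branches cannot overlap, UNION ALL returns the same rows without the sort/de-duplicate step.

```sql
-- The WHERE clauses are mutually exclusive, so the branches are disjoint
-- and UNION ALL is safe (and cheaper than UNION).
SELECT order_id FROM orders WHERE region = 'EAST'
UNION ALL
SELECT order_id FROM orders WHERE region = 'WEST';
```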
3. LOGGING instead of NOLOGGING for Operational Data Store (ODS), Data Mart (DM), and Materialized View (MV) objects that hold no system-of-record information – This causes more disk I/O than necessary, takes up space in the archive logs, and takes more resources to back up this redundant information.
4. Logging indexes – Very similar to item 3: there is no data in an index that can’t be recreated, and logging takes extra I/O and backup space.
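Items 3 and 4 can be sketched as follows, assuming a hypothetical data-mart table that can always be rebuilt from its source:

```sql
-- Reporting table with no system-of-record data; redo logging buys nothing.
CREATE TABLE dm_sales_summary NOLOGGING AS
SELECT product_id, SUM(amount) AS total_amount
  FROM sales
 GROUP BY product_id;

-- Item 4: the same reasoning applies to its indexes.
CREATE INDEX dm_sales_summary_ix ON dm_sales_summary (product_id) NOLOGGING;
```

Note that conventional DML against a NOLOGGING table is still logged; the attribute mainly benefits direct-path loads and index builds.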
5. Excessive or complex use of db_links.
a. Referencing a table through a db link takes network I/O.
b. Complex SQL involving db links is more difficult to tune and sometimes causes ORA-00600 errors. Where possible, architect so that intense queries reference tables on the database you are on, and reserve db links for small, simple selects and materialized views.
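A hedged sketch of item 5 (the db link and table names are hypothetical): rather than joining a remote table over the link in every query, snapshot it locally with a materialized view and join against that.

```sql
-- Every execution pulls remote rows across the network:
SELECT o.order_id, c.customer_name
  FROM orders o
  JOIN customers@remote_erp c ON c.customer_id = o.customer_id;

-- Alternative: replicate the remote table locally and join locally.
CREATE MATERIALIZED VIEW customers_local
  REFRESH COMPLETE
AS SELECT customer_id, customer_name FROM customers@remote_erp;
```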
6. Failure to use direct path when viable.
a. Look at using /*+ append */ hint in big insert statements.
b. When loading external data with SQL*Loader, use direct=true to use the direct path.
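A minimal sketch of items 6a and 6b (table and file names are hypothetical):

```sql
-- Direct-path insert: the APPEND hint writes blocks above the high-water
-- mark, bypassing the buffer cache (and minimizing redo if the table is
-- NOLOGGING).
INSERT /*+ append */ INTO sales_history
SELECT * FROM sales_staging;
COMMIT;  -- required before the session can query the table again

-- From the OS command line, the SQL*Loader equivalent is:
--   sqlldr control=sales.ctl direct=true
```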
7. No use, or misuse, of Oracle Parallel Query.
a. If CPU is underutilized, parallel query can provide a big improvement on queries that do full table scans of large tables, or when inserting a lot of data into tables.
b. Tune the query before using parallel query. Don’t add parallelism to a badly tuned query.
c. Don’t overdo it. Parallelism of 2–6 is generally plenty. Test your query with parallelism of 2, 4, 6, etc.; generally you will see very little improvement in execution time between 4 and 6 or higher. Every rule has exceptions.
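The advice in item 7 can be sketched like this (the fact table is hypothetical; start modestly and measure before raising the degree):

```sql
-- Full scan and aggregation of a large table with a degree of 4;
-- retest at 2 and 6 and keep the lowest degree that meets the target.
SELECT /*+ parallel(s, 4) full(s) */
       product_id, SUM(amount)
  FROM sales s
 GROUP BY product_id;
```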
8. Missing indexes or over-indexing. Self-explanatory?
9. Bitmap indexes on OLTP tables – Oracle’s locking mechanism locks too many records with bitmap indexes, which can cause issues in an OLTP environment.
10. Indexes missing on FK columns. Many times we look up records based on the value of a foreign key column; if so, an index will probably help.
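A one-line sketch of item 10, with a hypothetical parent/child pair:

```sql
-- Queries that fetch a customer's orders drive on the FK column,
-- so an index on it will usually help.
CREATE INDEX orders_customer_fk_ix ON orders (customer_id);
```

As a side benefit, indexing foreign keys also helps Oracle avoid broad table-level lock contention on the child table when parent rows are deleted or their keys updated.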
11. Failure to use incremental/fast refresh technology – Use change data capture to propagate just the changes. This can be much faster than full refreshes when the changes are small compared to the full volume. It is more complex to set up, but frequently worth it.
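A minimal sketch of item 11, assuming a hypothetical source table with a primary key:

```sql
-- The materialized view log captures just the changed rows...
CREATE MATERIALIZED VIEW LOG ON sales;

-- ...so the MV can apply only those changes instead of a full rebuild.
CREATE MATERIALIZED VIEW mv_sales
  REFRESH FAST ON DEMAND
AS SELECT * FROM sales;

-- Refresh incrementally ('F' = fast) after each load:
BEGIN
  DBMS_MVIEW.REFRESH('MV_SALES', 'F');
END;
/
```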
12. Materializing a view instead of tuning the query or going to the ODS/DW for the information – For a variety of reasons, some queries were difficult or impossible to tune, so MVs were used as a band-aid. Sometimes they were placed in a less-than-optimal location.
13. Misplacement of data objects. Place objects so that dependencies are minimized, or at least flow in one direction.
14. If you must truncate and reload a large table with many indexes, most of the time it will be significantly faster to drop the indexes, load the data, and re-create the indexes. Remember to use the /*+ append */ hint.
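Item 14 might look like this in outline (all names are hypothetical):

```sql
-- 1. Drop the indexes so the load doesn't maintain them row by row.
DROP INDEX big_table_ix1;

-- 2. Truncate and reload with a direct-path insert.
TRUNCATE TABLE big_table;
INSERT /*+ append */ INTO big_table
SELECT * FROM big_table_staging;
COMMIT;

-- 3. Rebuild the indexes in one pass.
CREATE INDEX big_table_ix1 ON big_table (some_column) NOLOGGING;
```

An alternative to dropping is marking the indexes UNUSABLE before the load and rebuilding them afterward, which keeps the index definitions in the dictionary.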
15. Failure to log metadata. You need to capture good metadata about your jobs and any errors. It makes troubleshooting easier, and you can’t manage what you don’t measure.
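The shape of such job metadata might be as simple as the following (the table and its columns are illustrative only, not a prescribed schema):

```sql
-- Minimal ETL job-log table: one row per job run, enough to answer
-- "what ran, when, how much, and did it work?"
CREATE TABLE etl_job_log (
  job_name    VARCHAR2(100),
  start_time  DATE,
  end_time    DATE,
  rows_loaded NUMBER,
  status      VARCHAR2(20),
  error_msg   VARCHAR2(4000)
);
```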
16. Failure to gather statistics, or failure to gather them correctly. This one is so basic it probably belongs at the top of the list. In order for Oracle’s cost-based optimizer to make smart decisions about how to access the data needed to fulfill a query, it needs to know some information about the size and distribution of the data in question. This is particularly critical as the complexity of the query increases. This subject could consume its own blog post, and I assume there are many articles, if not whole books, already written on it, so I will just touch on some high points.
Statistics should be gathered on tables and indexes “anytime there is a significant change” to the data. In an OLTP system, the data changes are a function of end users using applications and doing business. It might take several days, weeks, or even months for the changes to a table to become “significant,” so many DBAs set up a weekly batch job to update statistics on any tables that have significant changes; Oracle keeps track of changes, so it will know. However, on reporting systems that are loaded in batch mode, best practice dictates that the ETL architect/lead developer design in a mechanism to track the quantity of the changes and, if necessary, gather statistics immediately after updating a particular table.

Some common mistakes I’ve seen over the years:

a. No stats gathered at all.
b. Stats gathered at inappropriate and intermittently catastrophic times. A DBA sets up a weekly script that works fine most of the time, except one week an off-cycle batch job truncates tables right before the stats job runs, then loads the tables right after the stats job has gathered stats on empty tables. Monday morning nothing works, because Oracle thinks the tables are empty.
c. Over-collecting: either gathering too often, or gathering stats on too much of the data. Using the automatic sample size usually gets the right amount of data sampled.
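Gathering stats immediately after a batch load, with the automatic sample size mentioned above, can be sketched like this (the schema and table names are hypothetical):

```sql
-- Gather stats right after the load completes, letting Oracle choose the
-- sample size and cascading to the table's indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DW',
    tabname          => 'SALES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/
```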