I have an Analysis Services database with several cubes in it. We have a nightly job that extracts data from our source database into staging tables, and another that performs a "Process Full" of the entire SSAS database. After these jobs complete, one particular cube has a dimension containing 6 members, all of which are incorrectly labeled. (It's supposed to have values "M1 - name1" through "M6 - name6", but instead it gets one "M1 - " (code, but no name value) and five " - " (neither code nor name).)
I can fix this by doing a "Process Update" of just that dimension, followed by a "Process Full" of the cube that uses it; the dimension then contains all the expected labels.
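For reference, the manual fix I run corresponds to an XMLA batch roughly like the following (the database, dimension, and cube IDs here are placeholders, not my real object names):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Step 1: Process Update on the problem dimension only -->
  <Process>
    <Object>
      <DatabaseID>MyDatabase</DatabaseID>
      <DimensionID>MyDimension</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
  <!-- Step 2: Process Full on the cube that uses the dimension -->
  <Process>
    <Object>
      <DatabaseID>MyDatabase</DatabaseID>
      <CubeID>MyCube</CubeID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
```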
What I don't understand is why the database-wide "Process Full" is mucking up the dimension in the first place and why a "Process Update" gives me a different result.
I read on a Microsoft SQL BI blog that a ProcessUpdate "does not discard the dimension storage contents. It applies the changes in a 'smart' manner that preserves the fact data in dependent partitions."
But since these processes run in sequence, with no changes to the cube and no changes to the underlying data in between, I still don't get how the Process Update can produce different dimension contents.
This behavior is completely repeatable: any time I just process-full the cube, the dimension gets mucked up. If I process-update the dimension and then process-full the cube, the dimension is good.
(a) How can the Process Full produce different results than the Process Update on the dimension when both operate on the same data?
(b) Is there a better way to reach my desired cube state than tacking the "Process Update dimension / Process Full cube" steps onto my nightly job?