Posts Tagged SharePoint

Talking about Using the SharePoint Business Data Connectivity Designer in VS 2010 | Visual Studio Team Interviews


Introduction

At work I’m starting to build a BI Portal in SharePoint 2010.

Getting SharePoint 2010 installed has proved to be a bit of a trial. The main source of “aggravation” was the installation of SQL Server 2008 R2, probably caused by a limited “admin” ID in the work production domain. I’ve not explored exactly what’s missing, but I suspect some privileges are absent.

I’ve yet to install PowerPivot and see how it works with SharePoint 2010. That installation is another “adventure” I have in store for later this week (probably next week).

One of the elements I need for this “BI Portal” is a (SharePoint) list of the Analysis Services cubes which are going to be part of the Portal. That’s a long list, and it is best surfaced as a dynamic link into the Analysis Services metadata (AdomdConnection.GetSchemaDataSet will list the Analysis Services databases [catalogs in ADOMD speak] and their cubes).
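
As a minimal sketch of that metadata query (the server name is a placeholder; the schema rowset and column names are the standard ADOMD.NET ones), something like this should enumerate the databases and their cubes:

```csharp
using System;
using System.Data;
using Microsoft.AnalysisServices.AdomdClient;

class CubeLister
{
    static void Main()
    {
        // "localhost" is a placeholder; point this at the real Analysis Services instance.
        using (AdomdConnection conn = new AdomdConnection("Data Source=localhost"))
        {
            conn.Open();

            // DBSCHEMA_CATALOGS lists the Analysis Services databases (catalogs).
            DataSet catalogs = conn.GetSchemaDataSet("DBSCHEMA_CATALOGS", null);
            foreach (DataRow db in catalogs.Tables[0].Rows)
            {
                string catalogName = db["CATALOG_NAME"].ToString();
                Console.WriteLine(catalogName);

                // Switch to that catalog, then list its cubes via MDSCHEMA_CUBES.
                conn.ChangeDatabase(catalogName);
                DataSet cubes = conn.GetSchemaDataSet("MDSCHEMA_CUBES", null);
                foreach (DataRow cube in cubes.Tables[0].Rows)
                {
                    Console.WriteLine("    " + cube["CUBE_NAME"]);
                }
            }
        }
    }
}
```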

To do this one needs to use a “Business Data Connectivity” (BDC) object (.NET code which “pokes” external data into SharePoint).
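
For context, the code the VS 2010 BDC designer generates is an entity class plus a service class whose static methods become the Finder (ReadList) and SpecificFinder (ReadItem) operations. A hedged sketch of that shape, with the names and the hard-coded sample data invented purely for illustration:

```csharp
using System.Collections.Generic;

// Hypothetical entity: one row in the external list of cubes.
public class CubeEntity
{
    public string DatabaseName { get; set; }
    public string CubeName { get; set; }
}

// Hypothetical service class; the BDC designer maps these static methods to
// the Finder (ReadList) and SpecificFinder (ReadItem) operations of the model.
public class CubeEntityService
{
    public static IEnumerable<CubeEntity> ReadList()
    {
        // The real model would call into ADOMD.NET (as in the earlier snippet)
        // rather than return a hard-coded sample.
        return new List<CubeEntity>
        {
            new CubeEntity { DatabaseName = "SampleDatabase", CubeName = "SampleCube" }
        };
    }

    public static CubeEntity ReadItem(string cubeName)
    {
        return new CubeEntity { DatabaseName = "SampleDatabase", CubeName = cubeName };
    }
}
```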

The following links to Channel 9 on MSDN are highly recommended. The first video was all I needed to get up and developing a BDC (I’ll soon find out what’s missing). The second is a good series of tips about deploying and debugging BDCs.


Using the SharePoint Business Data Connectivity Designer in VS 2010 | Visual Studio Team Interviews

Tips & Tricks with Business Data Connectivity Model Deployment in Visual Studio 2010


Microsoft Analysis Services, and more


Introduction

This post continues the series on the data warehouse product retirement project I’m currently leading. The previous post in this series was Descending Into Microsoft Analysis Services.

Update – Where We Have Reached

  • The Analysis Services object model traversing program has migrated to a Windows Forms tree viewer. The program is a “lazy” evaluator, in that it only traverses an object (or collection) when the user selects it, which makes the thing pretty quick to use. It is also driven entirely by reflection, so in theory it could be applied to any object structure. That is a feature I’ve not implemented, but I may some time in the future. I should probably refactor the code and make a subclass of the TreeView, with the reflection code, into a “custom control” (there is a sketch of that idea after this list).
  • Defining a “Cube” from the XML and SQL tables is working well. A couple of refinements have been made to it. These include:
    • Getting the display order working better. It’s not perfect; to get it right we would need to add a presentation-order column to all of the SQL tables which are used to build the dimensions.
    • Reusing dimensions across “Cubes” which are built from the same SQL table. One little step forward for better data management. There is still a long way to go with improving the data management, but almost all of those improvements will have to come in subsequent projects.
  • The data comparer has progressed, but taken a detour. I’ll cover that in the “dancing goal posts” section (next).
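
As mentioned above, here is a minimal sketch of the lazy, reflection-driven TreeView idea. The class and member names are mine; the trick is to add a placeholder child so the expand glyph appears, and only reflect over the real children in BeforeExpand:

```csharp
using System;
using System.Collections;
using System.Reflection;
using System.Windows.Forms;

// A TreeView subclass that lazily populates child nodes via reflection.
public class ReflectionTreeView : TreeView
{
    public ReflectionTreeView()
    {
        BeforeExpand += OnBeforeExpand;
    }

    // Show any object as a root node.
    public void ShowObject(string label, object value)
    {
        Nodes.Add(MakeNode(label, value));
    }

    private static TreeNode MakeNode(string label, object value)
    {
        TreeNode node = new TreeNode(label + " : " + (value == null ? "null" : value.ToString()));
        node.Tag = value;
        if (value != null)
            node.Nodes.Add("loading...");   // placeholder so the node shows an expand glyph
        return node;
    }

    private void OnBeforeExpand(object sender, TreeViewCancelEventArgs e)
    {
        TreeNode node = e.Node;
        // Only populate once: a single "loading..." placeholder marks an unexpanded node.
        if (node.Nodes.Count != 1 || node.Nodes[0].Text != "loading...")
            return;

        node.Nodes.Clear();
        object value = node.Tag;

        // Collections get one child node per element...
        if (value is IEnumerable && !(value is string))
        {
            int i = 0;
            foreach (object item in (IEnumerable)value)
                node.Nodes.Add(MakeNode("[" + (i++) + "]", item));
            return;
        }

        // ...everything else gets one child node per readable public property.
        foreach (PropertyInfo p in value.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!p.CanRead || p.GetIndexParameters().Length > 0)
                continue;
            object propValue;
            try { propValue = p.GetValue(value, null); }
            catch (Exception ex) { propValue = "<error: " + ex.Message + ">"; }
            node.Nodes.Add(MakeNode(p.Name, propValue));
        }
    }
}
```

Pointing it at, say, a Microsoft.AnalysisServices.Server object with ShowObject("server", server) is then enough to browse the whole structure.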

Dancing Goal Posts

There are, or have been, a couple of “extra” requirements for the project. These have included:

  • A “data salvage” aspect to the project. There are a number of years’ worth of data which we are going to salvage from the existing system. Hence, the data compare function has taken on an extra requirement: to extract all of the base data from the existing system. This is a detour which may lead to a couple of choices needing to be made. These choices are:
    • To pour all of the data into one “Cube”. This exposes a number of issues:
      • The “cookie cutter” which builds the SSAS cubes would need to be enhanced to add a time dimension (to hang the different “cubes” of data from).
      • There could be revisions to the data, which have resulted in different “cubes” being created. Developing a dimensional model which caters for this as well could be a “bridge too far” for this project.
      • There have been changes to the dimensions over time. In the old system these changes have just resulted in different “cubes” with different label sets. Building an approach which implements “Slowly Changing Dimensions”, again, could be a “bridge too far” for the project.
  • My Analysis Object Viewer may well need some enhancements, to get a better understanding of the scope of the dimensional issues mentioned above. These enhancements could include:
    • Creating a cube for each of the salvaged datasets. This would be done by the existing “cookie cutter” we have for creating “Cubes”.
    • Building something visual to show the dimensionality of each of the cubes. This would need to give me a “table-like” view of the dimensions, with hierarchy and code-set information as well. Probably a big job. Or,
    • Build something which selects the set of cubes, then uses LINQ over the object model to determine the following (there is a sketch of this after the list):
      • Sets the “time 0” cube then runs time forward from there.
      • Identifies “New Dimensions”. These would be dimensions which did not exist in the “time 0” cube.
      • Identifies Changed Dimensions. There are a number of classes of change which will need to be detected. These include:
        • New Codes, or Labels
        • New Levels
        • Different Hierarchy
        • Differences in the Key pointing to a Label.
  • The data presentation layer for the project needs to be sorted through. There are a number of alternatives here which include:
    • Excel, and the pivot table. I’ve already pointed Excel at some of the cubes we have built, and they did work OK. I’m not sure how well an Excel solution will work with some of the low-bandwidth sites that this solution will need to support. Hence, the next couple of alternatives.
    • SharePoint, with the Excel parts. I’ve never explored how this works, so it will be a “voyage of discovery”. This solution has a number of benefits:
      • it may work better in the low bandwidth sites,
      • it may allow delegation of the management of the security of the cubes to clients,
      • it may provide a better environment for doing “big” analysis jobs.
    • SharePoint, with PowerPivot. A new thing from Microsoft which I’ve yet to see in action, so I’ve no idea what it gives above and beyond standard Excel functions.
    • Reporting Services. This may need to be part of the mix, just for the routine reporting requirements, or maybe more widespread. Again, sight unseen, I’m not sure what this will add to the mix.
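
Here is the LINQ-over-the-object-model sketch referred to above. It only compares dimension names between a “time 0” cube and the later ones; the server and database names are placeholders, and “first cube by name” standing in for “time 0” is an assumption. Detecting new levels, hierarchy changes and key/label differences would mean drilling further into each CubeDimension.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AnalysisServices;

class DimensionDiff
{
    static void Main()
    {
        Server server = new Server();
        server.Connect("Data Source=localhost");                // placeholder server
        Database db = server.Databases.FindByName("Salvage");   // placeholder database name

        // Assumption: ordering the cubes by name puts the "time 0" cube first.
        List<Cube> cubes = db.Cubes.Cast<Cube>().OrderBy(c => c.Name).ToList();
        Cube timeZero = cubes.First();
        HashSet<string> baseline = new HashSet<string>(
            timeZero.Dimensions.Cast<CubeDimension>().Select(d => d.Name));

        foreach (Cube cube in cubes.Skip(1))
        {
            HashSet<string> current = new HashSet<string>(
                cube.Dimensions.Cast<CubeDimension>().Select(d => d.Name));

            IEnumerable<string> added   = current.Except(baseline);   // "new" dimensions
            IEnumerable<string> missing = baseline.Except(current);   // dropped dimensions

            Console.WriteLine("{0}: new [{1}], missing [{2}]",
                cube.Name,
                string.Join(", ", added.ToArray()),
                string.Join(", ", missing.ToArray()));
        }

        server.Disconnect();
    }
}
```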

Conclusions

This is an ongoing “voyage of discovery”. I’m positive about the prospect of getting something which satisfies the “project brief”. Whatever needs doing beyond that will have to go on the “to do” list for some time in the future.


Descending into Microsoft Analysis Services


Introduction

One of the projects I have on the go at the new job is migrating a collection of reporting functions into the Microsoft way of doing things. There is a tight “timebox” on this project, and a diversity of reporting “cubes” which need to be migrated into Analysis Services Cubes.

The other constraint on the project is that the new Cubes should be a direct replacement for the original reporting solution. There should be no “enhancement” of the solution (to start with, at least). If we get the project to the finish line early, then there could be an opportunity to do some dimensional modelling on the implementations and “tidy up” the resulting Analysis Cubes.

The Strategy

My plan is a four-pronged attack on the problem. Those prongs are:

  1. Use the XML file, which is used to load the data (and metadata) into the current system, to define the Analysis Cubes. The developer I have working on this is making good progress building a program which defines a Cube programmatically, using the .NET object model which Analysis Services provides (Namespace: Microsoft.AnalysisServices; Assembly: Microsoft.AnalysisServices, in microsoft.analysisservices.dll). Defining the translation from the existing product’s conceptual model to the Analysis Services model will come next week. Then we will start to see how this could come together. (There is a trimmed sketch of this approach after the list.)
  2. A program which will traverse the object model and report all of the properties of a cube. Currently I have a version of this which produces a text file. Moving it to a WinForms interface could be an option for next week, or I may “put it to bed” as it stands.
  3. A program which will programmatically compare the results of the same queries in the two products. I expect the developer building this will make some good inroads into it over the coming week as well.
  4. “Sweeping up the loose ends”. I’m uncertain about all of the things which will end up in here. There is a bunch of documentation on the continued operation of the solution already in this bucket.
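
To give a feel for prong 1, below is a heavily trimmed sketch of what “defining a Cube programmatically” looks like with the Microsoft.AnalysisServices (AMO) object model. The server, database and cube names are placeholders, and the hard part (the data source, data source view, dimensions and measure groups derived from the product’s XML) is deliberately left out; this only shows the connect / define / Update() shape of the API.

```csharp
using Microsoft.AnalysisServices;

class CubeDefiner
{
    static void Main()
    {
        Server server = new Server();
        server.Connect("Data Source=localhost");             // placeholder server

        // Create the target Analysis Services database if it is not already there.
        Database db = server.Databases.FindByName("MigrationDemo");
        if (db == null)
        {
            db = server.Databases.Add("MigrationDemo");      // placeholder database name
            db.Update();                                     // creates the (empty) database on the server
        }

        // Define a cube shell in the local object model. Deploying it with
        // cube.Update(UpdateOptions.ExpandFull) will only succeed once the data
        // source view, dimensions and at least one measure group have been added,
        // so the real program builds those (from the product's XML) first.
        Cube cube = db.Cubes.Add("DemoCube");                // placeholder cube name
        cube.StorageMode = StorageMode.Molap;

        server.Disconnect();
    }
}
```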

Conclusions

  1. The object model for Analysis Services looks good (so far). It would appear to allow these types of interactions with the product.
  2. There are a number of issues which the project will need to address as part of the migration. They are not immediately part of the translation process, but part of the bigger “making it work well” process. These include:
    1. Storage management. I’ve not delved into this part of SQL Server and Analysis Services. These Cubes will consume space, and how that space is managed will be an issue.
    2. Performance tuning. Another part of SQL Server and Analysis Services Cubes which is unknown at present. Choosing an approach to generating the commonly used aggregates (or maybe all rollups), and to managing their storage, will be an issue.
    3. Instrumentation. What we currently get out of the systems, and what we should be getting (to make things more robust) is another open question.
    4. Integration Services. It is unclear at present how much new ETL will need to be built to deliver an automated solution. There are some ETL processes in existence which are used in the current implementations. How much of that the migration will need to modify is yet to be seen.
