Derived Styles based on unnamed default Styles

This seems so obvious in retrospect, but I just learned something very useful today, so I thought I'd put it out there in case anyone else wasn't aware of it.

Specifically, the problem occurs when you create a Style in XAML that is supposed to be the default Style for a particular type of control. For example:

<Style TargetType="{x:Type ComboBox}">
</Style>

This will allow you to create a style for all ComboBoxes. If a ComboBox does not specify a Style explicitly, it will get this one automatically. The tricky part comes when you want one particular ComboBox to deviate from the style just a little bit. So typically what I do is create a named main Style, then create the default Style based on that main one, like so:

<Style x:Key="ComboBoxBase" TargetType="{x:Type ComboBox}">
</Style>
<Style TargetType="{x:Type ComboBox}" BasedOn="{StaticResource ComboBoxBase}">
</Style>

Then if you make any further custom Styles, they can just be based on ComboBoxBase and change whatever is necessary. While this works, I just found a way that is potentially cleaner. It turns out that you can pass a Type to StaticResource instead of a key and get the default Style for that Type. Very handy, like so:

<Style TargetType="{x:Type ComboBox}">
</Style>
<Style TargetType="{x:Type ComboBox}" BasedOn="{StaticResource {x:Type ComboBox}}">
</Style>

How did I not know this sooner? This way you put all of your main styling into the official default Style, and you don't need to remember a separate name in order to base other Styles on it.
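
The same lookup works from code-behind too. Here's a tiny C# sketch, purely for illustration (the helper class and the FontWeight tweak are mine, not part of the trick itself), showing that the Type really is the resource key:

using System.Windows;
using System.Windows.Controls;

// Hypothetical helper: FindResource(typeof(ComboBox)) returns the unnamed
// default Style, which a new Style can then be based on.
public static class DefaultStyleExample
{
    public static Style DeriveFromDefault(FrameworkElement element)
    {
        // Look up the implicit (type-keyed) default Style for ComboBox.
        var defaultStyle = (Style)element.FindResource(typeof(ComboBox));

        // Base a new Style on the default and tweak just what needs to change.
        var special = new Style(typeof(ComboBox)) { BasedOn = defaultStyle };
        special.Setters.Add(new Setter(Control.FontWeightProperty, FontWeights.Bold));
        return special;
    }
}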

Disclaimer: I haven’t tried this on Silverlight, only WPF.

Actipro has a WPF MGrammar syntax editor

http://www.actiprosoftware.com/Products/DotNet/WPF/SyntaxEditor/Default.aspx

I haven’t tried to use this yet but it seems pretty interesting.

SyntaxEditor is a powerful text editing control that is packed with features for efficient code editing, including syntax highlighting, line numbers, block selection, IntelliPrompt UI, split views, zooming, bi-di support, and much more. It has many of the same code editing features found in the Visual Studio code editor.

SyntaxEditor is built on top of our next-generation extensible text/parsing framework. While over 20 sample languages are available to get you started (such as C#, VB, XML, and more), custom language definitions can be developed and distributed with your applications as well. SyntaxEditor is designed for use in IDE (integrated development environment) applications; however, there are many other applications out there that can take advantage of such a control.

A free add-on is included that integrates domain-specific language (DSL) parsers created using Microsoft Oslo's MGrammar with SyntaxEditor, allowing it to syntax highlight code based on the DSL parser.

Photoshop Import in Blend 3 – by Janete Perez

Janete Perez has created a great series of blog posts on importing Photoshop files into Expression Blend. I have been a part of the Expression team for almost a year now and I have been working on the Photoshop Import feature, so these posts are especially interesting to me.

http://blogs.msdn.com/janete/archive/2009/05/27/photoshop-import-in-blend-3.aspx

Terminator Salvation = awesome

If you haven't seen it yet, definitely do so as soon as possible. It was a great addition to the Terminator series and there wasn't much for me to complain about. It was non-stop action and very intense. I had high expectations and this movie didn't let me down. My only complaint was that the previews probably told me more than I would have liked to know; I wish they had cut down on the plot-revealing sections in those previews.

I think what I especially liked was the organic-seeming movements of the T-600s; the CG was nearly flawless. I also noticed a few really long single-take scenes, like in Children of Men. It was definitely a sweet movie.

http://www.imdb.com/title/tt0438488/

Staged Pipelines

In an effort to make the MetaSharp pipelines more powerful I’m about to add the concepts of stages and connectors. I’ve been thinking about it a bit and I drew up some diagrams to help me express how the pattern should work.

At a high level it's pretty simple: every pipeline has multiple stages, and every stage has multiple steps. Each stage has one or many input connectors and one or many output connectors, which connect it to the next stage of the pipeline.
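
To make that a little more concrete, here's a rough C# sketch of the shape I have in mind. These interfaces are hypothetical, not actual MetaSharp types:

using System.Collections.Generic;

// Hypothetical sketch of the pattern, just for illustration.
public interface IStep<TIn, TOut>
{
    // One unit of work inside a stage.
    TOut Process(TIn input);
}

public interface IStage<TIn, TOut>
{
    // Values arrive on the input connector from the previous stage and the
    // results are handed to the next stage through the output connector.
    IEnumerable<TOut> Execute(IEnumerable<TIn> inputs);
}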

With this in mind, there are four possible types of stages, defined by their input and output connectors. Stages must be chained together with matching input and output connections. You want multiple types because certain kinds of operations are simply not possible to do simultaneously, while others are completely isolated and perfectly acceptable to run asynchronously.

Many to Many

For each input value a complete inner pipeline of steps is created, meaning every input value from the previous stage will be processed by the same steps. Each inner pipeline runs asynchronously and should not communicate with the others. The stage will complete once every inner pipeline has finished running.
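
In sketch form (hypothetical types again, with Tasks standing in for whatever the asynchronous execution ends up being):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical sketch of a many-to-many stage: one inner chain of steps per
// input value, each chain run independently with no communication between them.
public static class ManyToManyStage
{
    public static TOut[] Execute<TIn, TOut>(IEnumerable<TIn> inputs, Func<TIn, TOut> innerChain)
    {
        // Start one inner pipeline per input value.
        var tasks = inputs
            .Select(input => Task.Run(() => innerChain(input)))
            .ToArray();

        // The stage completes only when every inner pipeline has finished.
        Task.WaitAll(tasks);
        return tasks.Select(t => t.Result).ToArray();
    }
}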

1 to 1

This type of stage will accept one input value and produce one output value. It will create exactly one chain of steps and execute synchronously.
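
As a sketch this one is nearly trivial (hypothetical types again):

using System;

// Hypothetical sketch of a one-to-one stage: a single value pushed through a
// single chain of steps, synchronously.
public static class OneToOneStage
{
    public static TOut Execute<TIn, TOut>(TIn input, Func<TIn, TOut> innerChain)
    {
        return innerChain(input);
    }
}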

1 to Many

This type of stage will accept one input value and have exactly one chain of steps but will produce many output values.
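
A sketch of that shape (hypothetical types):

using System;
using System.Collections.Generic;

// Hypothetical sketch of a one-to-many stage: one input value, one chain of
// steps, many output values handed to the next stage.
public static class OneToManyStage
{
    public static IEnumerable<TOut> Execute<TIn, TOut>(TIn input, Func<TIn, IEnumerable<TOut>> innerChain)
    {
        return innerChain(input);
    }
}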

Many to One

This type of stage will accept many values and run them all through exactly one chain of steps.
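
And the corresponding sketch (hypothetical types):

using System;
using System.Collections.Generic;

// Hypothetical sketch of a many-to-one stage: all input values are fed through
// the same single chain of steps, producing one result.
public static class ManyToOneStage
{
    public static TOut Execute<TIn, TOut>(IEnumerable<TIn> inputs, Func<IEnumerable<TIn>, TOut> innerChain)
    {
        return innerChain(inputs);
    }
}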

From this I should be able to make any type of compilation pipeline imaginable. For example a typical pipeline might be something like this:

  • Parse files
  • Combine AST
  • Resolve References
  • Generate Assembly

In which case you might end up with the following stages (sketched roughly in code after the list):

  • M:M, Parse files all at once
  • M:1, Combine the ASTs into one tree.
  • 1:1, Resolve and transform the tree.
  • 1:1, Transform into IL
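
Using made-up placeholder names (SyntaxTree, Parse, Combine, Resolve, and EmitIL are for illustration only, not MetaSharp APIs), the wiring might look roughly like this:

using System.Collections.Generic;
using System.Linq;

// A rough sketch of wiring the example pipeline together.
public static class ExamplePipeline
{
    public static byte[] Compile(IEnumerable<string> files)
    {
        // M:M - parse every file independently.
        var trees = files.Select(Parse).ToList();

        // M:1 - combine the individual ASTs into one tree.
        var combined = Combine(trees);

        // 1:1 - resolve references and transform the tree.
        var resolved = Resolve(combined);

        // 1:1 - transform the resolved tree into IL.
        return EmitIL(resolved);
    }

    public class SyntaxTree { }

    // Placeholders standing in for the real stages.
    private static SyntaxTree Parse(string file) { return new SyntaxTree(); }
    private static SyntaxTree Combine(IEnumerable<SyntaxTree> trees) { return new SyntaxTree(); }
    private static SyntaxTree Resolve(SyntaxTree tree) { return tree; }
    private static byte[] EmitIL(SyntaxTree tree) { return new byte[0]; }
}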

You could also quite easily imagine that last step transforming into multiple objects or multiple files or something like that. Also, the good news is that I think this shouldn't actually be that complicated. The pipeline simply deals with connecting stages, and each stage has a very simple strategy for processing its steps. The real work will lie in implementing the stages, but even then each stage is completely modular and singularly focused.