Sunday, May 28, 2006
Windows forms designer cannot do [...]
I should have expected it: the Windows Forms designer in Visual Studio 2005 cannot load controls derived from generic classes. In addition, of course, to not being able to load controls derived from abstract classes. I know it's hard to do two-way synchronization between generated code and the design surface while covering every way a class can be declared, but Microsoft could at least stop fooling us into thinking everything can be done easily in VS. Developers should be aware of the choice they have to make: if you want a serious architecture in your software, forget about design-time functionality. Design-time is for VB developers: if you want it, you're doomed to casting types (instead of using generics) and throwing NotImplementedException (instead of making a member abstract). In my opinion, design support for generics and abstract classes was more important than building an NUnit clone into the IDE, but Microsoft management's priorities seem to gravitate towards stuff that makes for cool screenshots in feature lists. What really frustrates me is that MS has written volumes on software design, coding standards, practices etc. and ships a number of tools that enforce them, only to be let down by inadequacies in the forms designer, the one thing that was invented ages ago.
Wednesday, November 16, 2005
How to invoke a method AFTER an event handler in .Net
BeginInvoke! There's a magic word for you. It sounds like black magic but works like a charm. It's nothing new, but it stayed hidden in the multithreaded realm, and I avoided it for a long time although it popped up every now and then to save the day.
Specifically, I'm talking about .Net Windows Forms programming and the Control.BeginInvoke() method. The MSDN library says that BeginInvoke executes a delegate asynchronously on the thread that created the control's underlying handle (for a form, that's the UI thread): big deal. The real stuff is hidden behind the word "asynchronously"...
What happens if you call BeginInvoke ON the primary thread? The call is just queued in some kind of a request queue (it's the good old message queue). The form will finish the event handler that it's currently executing and then call our method!
So: inside an event handler you decide you want to do something after the handler finishes? No problem, just BeginInvoke() the method that does your stuff. Here are two illustrations that spring to mind:
- Focusing a (grand)child control from within the form's Load (OnLoad) event won't work because the control's CanFocus property is false (its initialization hasn't finished yet? Whatever). Use BeginInvoke() to execute your method AFTER OnLoad.
- Changing the contents of a ListView from within an ItemCheck event handler (probably true for any other event). If you try to add items to the list, it will go mad. Just schedule your method for later with BeginInvoke().
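The first scenario can be sketched like this (a minimal sketch; the form and control names are illustrative):

```csharp
using System;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly TextBox nameBox = new TextBox();

    public MainForm()
    {
        Controls.Add(nameBox);
        Load += MainForm_Load;
    }

    private void MainForm_Load(object sender, EventArgs e)
    {
        // Calling nameBox.Focus() here directly would fail:
        // CanFocus is still false during Load.
        // Queue the call instead; it runs on the same (UI) thread,
        // but only after the Load handler has returned.
        BeginInvoke(new MethodInvoker(FocusNameBox));
    }

    private void FocusNameBox()
    {
        nameBox.Focus();
    }
}
```

The key point is that BeginInvoke posts the call to the message queue, so it is processed only after the current event handler has finished.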
Saturday, October 15, 2005
Longhorn and longtooth
Some thoughts on WinFX/Longhorn/Avalon/Indigo/WinFS/WCF/Vista, what I call the Microsoft "Codename" platform. I largely ignored it because most of the information coming from Microsoft starts with "1. This will boost your productivity enormously." But during the past few days I saw some interesting stuff here at the Microsoft Sinergija 05 conference in Belgrade. It all looks like a good next step in the right direction, although I believe that the biggest stone around developers' necks today is not the lack of solutions in the new areas these technologies address, but rather the deficiencies in existing ones. Typical Microsoft? Giving us a Windows 3 "multimedia" graphical interface on top of an OS with 8.3 filenames? Could be. The time is ripe to clean up our computing environments: dump the old paradigms and technologies and replace them with new ones, instead of just piling new on top of old.
For instance, in .Net 3.0, LINQ will make it easier to query a relational database. But I think the real problem is that the database is relational in the first place. It's an anachronism. If you create a UML model with inheritance and all the beautiful object-oriented stuff, you will have to cripple it by squeezing it into a 1960s storage technology called a relational database. How do you implement inheritance in a relational database? Can you even simulate it? It is the 21st century; by now we should be using remoting to persist objects directly into the database and getting real collections back from our queries. We should create classes instead of database tables, use database-side object methods instead of stored procedures, and handle in-server object events instead of triggers.
And this is just one of the things that have outlived their usefulness. How about file storage for another? One more impedance mismatch, the threshold between the object world and file systems: you can serialize objects into a file, easily done today in .Net or Java. But that's the end of your options. Can you find an object in a file? Not unless you write your own code for it. Can you query the contents of a file? To quote Michael Palin: "Er, not as such". On the other hand, MS Access can query the data inside its files. SQL Server can, too. Those files are meant for internal use by the application (the user or the programmer need not know much about their existence), and that is the way it should be. So why doesn't Microsoft Word put all of its documents into a single database? Well, you wouldn't be able to copy them to floppy disks, delete them etc. All of those operations would have to be implemented by Word. Or, to put it another way: the infrastructure doesn't support it. Still, data files are nothing but a rudimentary replacement for databases. We're storing data in local files because we don't have local databases.
So, what's stopping us from having a small database server as an integral part of the OS? To have structured data stored inside it, not scattered all over the hard disk, mixed with various DLL, EXE and other system files? To be able to query it, and query it not just to find "documents containing the word XYZ" but to find "paragraphs containing the italicized word XYZ", and to find it not only inside Word documents but also inside Excel, PDF and other files, all from a single query.
It's not hard to imagine, and Microsoft for one seems to be near the right idea: Office documents can now be saved as XML. If we dumped them into an XML database, we'd be able to do most of the above. Think of the possible uses: I could create notes in my Word file and then quote parts of them in a PowerPoint presentation, using references to (instead of copies of) the original text, so that when the original is updated, so is the PPT. I could scribble additional comments inside the PPT, then create a filtered view of it (analogous to a database view) that says "keep the structure but eliminate the scribbled comments". And then set up a replication mechanism (I'd call it e-mail ;)) to have that view's data replicated (sent) to whomever I want. This data "linking and embedding" idea is also nothing new: it looks like the things OLE always promised.
What's really important to note here is that most of the required technology is already here (or at least near), and most of the mechanisms are tested in practice; they're just not implemented everywhere we need them. We could still use OLE to embed data, just changed to store data in an object/XML database instead of a file. We could use remoting (which is still being developed but has proved to work well) to access data in a database. We'd use XQuery to query the data, maybe XSD to describe it. The next thing to do is try to replace the old technologies with new ones. Having a clean and unified environment for developers would mean 1. an enormous boost in productivity.
Friday, August 12, 2005
IXmlSerializable and IObjectReference
One way to serialize singleton or semi-singleton objects into XML. Not very nice, but it works:
What I want to do is serialize a reference to a well-known global object, something like the ones derived from System.Type. I don't want the object itself to be serialized; I just want to transfer its name to the other side, where I'd use the name to find the global instance the reference should point to.
The secret is in the IObjectReference interface. It can be used to have an object say "this is not me: I'm over there". The interface is used during deserialization: if a deserializer retrieves an object that implements IObjectReference, it calls the object's GetRealObject method to get a reference to the real object. So what we do is this: we add to our "global" object's class a string field called nameUsedForSerialization, used only for serialization purposes (you may have already guessed it). We implement IXmlSerializable so that when the object is serialized, only its name is written. When it's deserialized, an empty object is created that contains just the name. And then, when GetRealObject() is called, this name is used to find the real object. Like this:
// GlobalObject stands in for the "global" object's class;
// SomeInstanceName and MyUtilityClass.FindRealObject are placeholders
// for your own well-known-instance naming and lookup logic.
public class GlobalObject : IXmlSerializable, IObjectReference
{
    #region IXmlSerializable Members

    // Used only during (de)serialization.
    private string nameUsedForSerialization;

    public void WriteXml(System.Xml.XmlWriter writer)
    {
        // Serialize just the instance's well-known name.
        string name = this.GetType().ToString() + "." + SomeInstanceName;
        writer.WriteString(name);
    }

    public System.Xml.Schema.XmlSchema GetSchema()
    {
        return null;
    }

    public void ReadXml(System.Xml.XmlReader reader)
    {
        nameUsedForSerialization = reader.ReadString();
    }

    #endregion

    #region IObjectReference Members

    public object GetRealObject(StreamingContext context)
    {
        // Swap the freshly deserialized stub for the global instance.
        return MyUtilityClass.FindRealObject(nameUsedForSerialization);
    }

    #endregion
}
Saturday, April 30, 2005
Contexts #2
One more iteration on the subject of contexts. But first, let me recap what I said so far: we have a certain type of variable marked as "contextual". When someone tries to read such a variable, the runtime checks to see if the variable's value has been set. If not, it finds the closest contextual parent that has this variable set, and uses it.
There could be two basic types of contextual parents: structural (i.e. the component has a pointer to a contextual parent) or caller (the calling method is the contextual parent). I've stated that we have to declare each contextual variable as a separate type, but it wasn't too clear what the declaration's scope would be. Let's clear this up and modify the philosophy a bit.
I tried to come up with a simple real-life example for contexts, but was unable to find one. While the usage of contexts could be broad (and there are already many concepts that are similar or equal to them - like ambient properties in windows controls, for example), they are mostly applicable in more complex situations. And complex situations don't make good examples.
One interesting application for contexts could be Visual Studio .Net's design-time functionality. For those who aren't familiar with it: in Visual Studio's development environment, controls and components get instantiated the same way as at runtime, but need to - and do - behave somewhat differently. Let's say we are developing a multi-tier application with such functionality in mind: we have a number of windows controls that communicate with the business logic layer, and the business logic communicates with the database-access layer. But at design-time we want the database layer to behave like a stub component, i.e. do nothing. We signal that we're in design mode by setting the DesignTime contextual property on the windows form we're working on to "true". (I know this works differently in Visual Studio, but let's pretend we're building an alternative system the way we want it.)
Now, a database layer method needs to access this contextual property, and we definitely don't want to carry it around through method arguments. Furthermore, we don't want the value to be static because we want to turn on the database access when we need it (for example, we want to access the database from a list control to be able to automatically generate its columns).
It could work like this: database layer classes would declare the DesignTime contextual variable as a calling-context variable, which means they would inherit its value from the calling methods. The business logic layer classes would do likewise, and inherit the value from the controls that call them. Now, the controls have a hierarchical structure, and their variable should be of the structural-context type. Thus, a control would inherit the value from its parent controls, which would inherit the value from the form. Graphically, it could look something like this:

Note that the green arrows represent calling-context parent relations and the red represent structural parents. Comp1 and Comp2 are data-layer components.
So it looks like this is the solution: let each component declare its contextual variable and decide whether it will implement it using structural or calling hierarchy. Note that calling-context variables should probably behave like static properties, because they are not bound to objects.
Having all this in mind, we have several questions to answer:
- Should we declare calling-context variables inside methods? They are obviously not bound to objects, but nevertheless there could exist an option to declare them at class level.
- Could one class support both a structural hierarchy (at class level) and caller hierarchy (in some of its methods)?
- Could a structural context be declared to use a static variable for parent context reference?
- How do we know that contextual variables on different classes have the same meaning? Only if their name is the same? Probably a separate contextual type needs to be declared.
- Is there really no use here for .Net attributes, they look so cool?
Let's try to answer "yes" to all these questions, including the last (yes, there really is no use for attributes). Like this:
// declare a contextual variable type
public context bool DesignTime;
// a data layer class supporting this variable in a caller-
// context fashion
public class Comp2
{
protected DesignTime(Context.Caller);
public void AccessDatabase()
{
if(!DesignTime)
{
// access the database...
}
else
{
// just pretend you did
}
}
}
// a class with structural support
public class MyUserControl : UserControl
{
protected Control Parent;
// structural context - the structure is
// traced via the Parent property
protected DesignTime(Context.Structural,
Parent);
}
// a class with mixed caller, structural and
// static support
public class SomeClass
{
protected static object ContextualParent;
protected static DesignTime(Context.Structural,
ContextualParent);
public void SomeMethod()
{
DesignTime DesignTime(Context.Caller);
// ...
}
}
I haven't yet figured out a really elegant way to declare and consume contextual variables. Maybe in a later post...
Ok, now how about a more complex scenario? What happens if components in the data layer have a structural hierarchy, so there's more than one hierarchical path to choose from? Like this:

In this case, is it really good not to be able to dynamically determine whether the structural or the calling context is used at a given time? I think it probably is. If we need the same contextual property to work differently at different times, it may be a sign of bad design. I think it's cleaner to have the context variable's behavior decided statically. It seems simpler anyway.
Some additional notes:
- These two context types (structural and caller) are probably not all there is, so there should be a way to declare one's own logic for them. The philosophy could possibly be borrowed from .Net's delegates (or whatever their equivalent in Java is).
- What about components that don't declare support for the contextual variable? When traversing the hierarchy, they should simply be skipped. If a component acts as a crossroads from the structural to the calling hierarchy (like the DbControl in the illustration), it has to have the context variable declared. And to keep the clutter to a minimum, AOP should probably be called in to help.
- Could there be contextual methods? Possibly, yes, but they would need to behave differently than variables. While reading of a contextual variable triggers the search through hierarchy to the point where the variable's value has been set, accessing a contextual method would probably trigger a search to the point where the method has been implemented. Which probably also means the method would have to be a part of an interface.
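Until such language support exists, a caller-context variable can be approximated in today's C# with a thread-static stack of values (a sketch; the class and member names are mine, not part of any proposed syntax):

```csharp
using System;
using System.Collections.Generic;

// A caller pushes a value before making calls and pops it afterwards
// (via Dispose); a callee simply reads the innermost value. Each thread
// gets its own stack, mimicking the "calling hierarchy" of the post.
public static class DesignTimeContext
{
    [ThreadStatic]
    private static Stack<bool> values;

    // What a callee like Comp2.AccessDatabase() would read.
    public static bool Current
    {
        get { return values != null && values.Count > 0 && values.Peek(); }
    }

    public static IDisposable Set(bool value)
    {
        if (values == null) values = new Stack<bool>();
        values.Push(value);
        return new Popper();
    }

    private class Popper : IDisposable
    {
        public void Dispose() { values.Pop(); }
    }
}

// Usage:
// using (DesignTimeContext.Set(true))
// {
//     comp2.AccessDatabase();   // sees DesignTimeContext.Current == true
// }
```

This covers only the caller-context case, and only within one thread; the structural case would instead walk a parent pointer chain, as described above.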
Friday, April 15, 2005
Diagnosing .Net debugger problems after Office 2003 installation
I ran into this problem and was unable to find a good solution on the web: When you install Office 2003, the .Net debugger just stops working. Its startup time increases to a minute or two, and if you try to attach to a process for debugging you get a message that goes something like 'The debugger is not properly installed. Run setup to install or repair the debugger.'
There's a hint about this at blogs.msdn.com, but it didn't help in my case. If you have Visual Studio installed, running a repair setup will solve the problem without any fuss. (Of course, if you don't count disruption of your development process as fuss... Trust Microsoft to regularly come up with something that will drive your productivity down and make your nerves thinner). IIRC, after Visual Studio repair you also need to repair Office.
So: if you don't have Visual Studio, reinstalling the .Net SDK and/or framework won't do the trick. What you have to do is delete the offending files and then repeat the setup. I simply deleted the whole C:\Program Files\Common Files\Microsoft Shared\VS7Debug folder. Then I did a search for mscordbi.dll in the Windows folder, which produced two files (sbs_mscordbi.dll and mscordbi.dll) that I promptly deleted. Then I ran the .Net SDK and .Net framework setups again, and that cracked it.
Now, you could probably get away with deleting only the VS7Debug folder, because the MDM.EXE file inside it is clearly different after the repair installation. Before, its version number was 11.something (yes, it reeks of Office all over), and afterwards it was 7.10, so it's the prime suspect. Also, I believe DbgClr.exe is present only in the SDK package, so it might be sufficient to reinstall just the SDK. But since DbgClr now works for me, I'll leave it to someone else to figure out the rest of the story :).
Monday, March 21, 2005
Try-finally: an addition
Um, one addition to the last post: the "create my own transaction if I'm not already under one" code fits into another already mentioned category, the contexts. A transaction is naturally a contextual property: it doesn't need to be passed explicitly down the object or caller hierarchy. The under_transaction generator (or template, or macro, call it whatever you like :)) should really put its newly created Transaction object into a contextual variable, thus automatically making it available to any code that needs it.
And another thing, before things started sounding simple: an intelligent code generator could be able to detect what's going on in the rest of the code. In the example with the transaction, we had
under_transaction
{
// some code
}
It would be good if the code generator could be aware of the parts of "some code" that would really be using its transaction. It could then wrap its code more tightly around it.
How could it do this? With refactoring. But not the refactoring we know today - machines would really need simpler refactorings, something we humans wouldn't even call refactorings. Like, tightening a try-finally block. (Yes, I know, this is what optimizers routinely do. Note that I didn't say the code generator should do its own refactoring/optimization :)). But, more on that later.
Example: generating try-finally wraparound code
Here's a nice illustration of the kind of work the aforementioned code generators could do for us. Consider the following pseudo-C#, where an operation is done under a transaction and wrapped in a try-finally block:
this.Transaction = new Transaction();
bool success = false;
try
{
    // do something under Transaction
    // ...
    success = true;
}
finally
{
    if(success)
        Transaction.Commit();
    else
        Transaction.Rollback();
}
What we have here is a good example of unavoidable repeated code. Anywhere you want to do several operations under one transaction, you're bound to repeat this. You cannot extract the common code and put it into a method, because your non-common code is in the middle of it. You cannot make two methods (beginning and end) because you cannot split the try-finally block. You're bound to repeat it, over and over again.
Now, the above code is not too big or complicated, and one could live with repeating it. But, we could make it more complicated by having the method create its own transaction if it isn't already under one. Something like this:
bool usingLocalTransaction = false;
if(this.Transaction == null)
{
    this.Transaction = new Transaction();
    usingLocalTransaction = true;
}
bool success = false;
try
{
    // do something under Transaction
    // ...
    success = true;
}
finally
{
    if(usingLocalTransaction)
    {
        if(success)
            Transaction.Commit();
        else
            Transaction.Rollback();
    }
}
Now, this is just a hint of how ugly it can get - and there is no easy way out. The modern programming paradigms (at least the ones I know of) don't give us a solution here.
So, we'll call our fictional code generators to the rescue. The code above would look like:
under_transaction do
{
    // do something under Transaction
    // ...
}
The code generator for under_transaction could be an absolute simpleton: its task would simply be to wrap the given code block's beginning and end with its own transaction-management code. (The IDE/compiler would possibly have to mangle some of the variable names to avoid duplicates.)
Do we really need a code generator to solve this particular problem? Probably not - we could get away with changing the way methods are defined (there wouldn't just be the methods that you need to call, but a new kind of method that somehow wraps around your code). But this change would be more easily done with a code generator. We could write a "wraparound-inline-method" generator for this occasion. We could invent other ways to do the same thing, write generators and suit our needs any way we want to (invent our own dialect of a programming language? Why not?).
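For the record, the "method that wraps around your code" can already be approximated with delegates: a method that accepts the non-common code as an anonymous method and puts the transaction boilerplate around it. A sketch under the same assumptions as above (the Transaction class and the Tx holder are hypothetical):

```csharp
using System;

// Hypothetical transaction, with a flag so the outcome can be observed.
public class Transaction
{
    public bool Committed;
    public void Commit()   { Committed = true; }
    public void Rollback() { Committed = false; }
}

public static class Tx
{
    [ThreadStatic]
    public static Transaction Current;

    // Wraps the given block in the begin/commit/rollback boilerplate,
    // creating a local transaction only if none is already active.
    public static void UnderTransaction(Action body)
    {
        bool usingLocalTransaction = false;
        if (Current == null)
        {
            Current = new Transaction();
            usingLocalTransaction = true;
        }
        bool success = false;
        try
        {
            body();
            success = true;
        }
        finally
        {
            if (usingLocalTransaction)
            {
                if (success) Current.Commit();
                else Current.Rollback();
            }
        }
    }
}
```

A caller would write Tx.UnderTransaction(delegate { /* do something under Tx.Current */ });. The downside compared to a generator is that the wrapped code runs in a separate delegate, so it cannot, for example, return from the enclosing method - which is exactly the kind of seam a real code generator wouldn't have.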
Thursday, March 17, 2005
The solution
There's one "universal" solution Microsoft has given us, and it may well be the only solution for our troubles with its products: restart the universe and see if the problem repeats itself.
Friday, March 04, 2005
SP1 for Visual Studio 2002
I must say I never expected it to happen: Microsoft has released Service Pack 1 for Visual Studio 2002. Why now? What's the point? I thought Visual Studio 2003 was an (admittedly rather expensive) SP1 for Visual Studio 2002, since it brought us nothing new but bugfixes. So, when can we expect a service pack for Visual Studio 2003, in 2006?