Friday, April 4, 2008
Great social networking idea
My colleagues will probably make fun of me for posting this, but I love the concept. Technobabble 2.0 has created a "Vote for the best analyst" contest. This is an outstanding use of the social media concept. Because of Technobabble's reach and audience, I think the results should give a pretty good snapshot of how the online community views analysts, and it could give rise to a new generation of superstars.
Along the same lines, I read yesterday that the American Medical Association has been working on a standard for creating a doctor rating system. It seems like a pre-emptive strike by the medical field to corral the social network effect that could happen if someone beats them to the punch and deploys a killer doctor-rating web app. I know that several systems of this type already exist, but none of them has reached the critical mass needed to have a true impact. Moving first would allow the AMA to set the terms for ratings, establish a standard scoring scale, and possibly prevent disinformation from being spread about doctors. I am on the fence on this one. I don't really have a problem with online ratings for lawyers, coders, analysts, companies, etc., but I do have a concern about opening up the medical profession to public scrutiny. Here is the reason why: doctors at higher-end medical facilities would be unfairly exposed to more risk online, due to the demographic that they serve, and that could skew the results. On the flip side, fantastic surgeons and doctors operating out of "St. Elsewhere" would not get the recognition they deserve. If the AMA puts a system in place and all hospitals are required to collect ratings and reviews at the end of every treatment session, we will get a much clearer picture of the effectiveness of individual doctors.
Wednesday, April 2, 2008
Analysts Gone Mad!!!!
Big news in the analyst space this week. It looks like there are some rumblings about Silver Lake Partners selling their holdings in Gartner to IDG. This is not very interesting when you just take a look at the article, but it becomes intriguing when you look at some of the other news swirling around this space. For instance, Gartner's share price has been behaving erratically over the last 6 months, dropping from a high of 28 down to 15 and then immediately spiking back up to 20, all on no specific news. The second point is that there has been some anecdotal evidence that Gartner has not been very helpful in dealing with existing customer issues or in posting new content to the web. They were supposed to have launched a stronger blog/social network initiative, but it looks like it has stalled. Furthermore, Forrester is continuing to push into their space with more effectiveness and a louder voice. Betting man that I am, I say there is a .20 probability of a buyout of some type taking place in the next year.
Also in the analyst news, a hello / goodbye for David Linthicum: formerly of the Linthicum Group, formerly of ZapThink, now CEO of Strikeiron. When the Linthicum Group was bought out 6 months ago by ZapThink, I was very excited to see such a well-known industry expert coming on board at my favorite SOA analysis site. However, David's posts since the merger had begun to take on a far more repetitive and negative tone; you could actually see the frustration he was feeling coming through the screen. The short stint at ZapThink bugs me a bit, but only because I had such high expectations for the merger. As expected, both sides are saying the right things about the time spent, but from the outside it just looks like their differing views on SOA were incompatible.
Monday, March 31, 2008
Microsoft Introduces Skyscrapr
Microsoft has launched a new architect portal called Skyscrapr. It seems to be a convergence portal for all things architecture-related that Microsoft produces. Additionally, I noticed a HUGE uptick in blog postings to the MSDN Architecture Journal over the last couple of days. I didn't check my aggregator for a couple of days, and I had over eighty items waiting for me to read. Here is the RSS feed:
http://www.microsoft.com/feeds/msdn/en-us/architecture/rss/rssMSDNHomePgArc.xml
A couple of articles that I noticed were interesting:
Pragmatic Architecture : Data Access
The Optimistically Critical Architect
Software Abstraction Layer
The Role of the Software Architect : Caring and Communicating
Thursday, February 14, 2008
Yet More Development Angst
As is obvious from my previous post, I'm completely disenchanted with the preachings of the patron saints of technology and their constant gear switching on "best practice", "best language", and "best framework".
Best practice is just that: what works well in practice, not in theory. And theory is the domain in which they dwell. I get tired, as much as most of you do I'm sure, of beating the "best practice" drum only to have "best practice" thrown by the wayside for the "just get it done" mentality that EVERY corporation, when put to it, adheres to.
"Best language" and "best frameworks" are both very subjective terms and I realize this. I do, however, expect prophets of a particular "best **/*" to stick to one or maybe two for a few years before they declare the "next" best language or framework.
Just as these same cats a few years ago were mocking client side work and declaring that everything should be done on the server side, they've now switched gears again and declared it should be done on the client with minimal calls to the server. There will be yet another switchback in the next few years as more and more client side code becomes unmaintainable.
All of the switching and technological "discovery" is perfectly fine and good, but if you ask yourself the simple question "Has development gotten easier with the advent of all these new technologies?" you'll find the answer is no - emphatically. Not only is it no, it's an order of magnitude more difficult to properly design, implement, and pay for an enterprise-level application than it was just 10 years ago...
Users not only expect but demand the same type of functionality and responsiveness that they had 10 years ago when VB desktop apps ruled the world. And here we are, still mired in the same transport level muck we started out in...
Ruby Can Save The World!!!
(This is an excerpt from an email, but I felt the rant deserved a more public forum)...
*** Starting Rant ***
"Why do we continue to re-invent the (broken) wheel?"
The talking heads continue to keep themselves popular on the conference circuit by declaring "new" languages sexy. Where was Martin Fowler when I was playing around with Ruby in the late 90's? Declaring Java the next sexy language.
Do I like Ruby / Groovy / ${insert cool language here} ?
Sure, they're all cool.
I like Lua myself, and have had fun with Perl, Python, Smalltalk, Ada, and a score of other languages in the past. But how about we stop trying to fix the symptoms of our programmatic issues by creating new languages and instead focus on the REAL problems behind them?
Why does web development suck compared to the days of desktop development? First and foremost is the transport, and not far behind is the statelessness. Statelessness has been solved somewhat, but why is it that web development has been going on solidly since the 90's and we still have to be concerned with this request-response paradigm? Why hasn't this communication layer been abstracted away so that development can focus on the important aspects of the project?
JSF and other overly complicated frameworks may help with that issue, but they introduce plenty of problems of their own. We're absolutely destroying the KISS principle every time we adopt the next flashy framework or language that promises easy setup and next-to-nothing maintenance - these always have a cost and rarely fix the problem.
How do we fix this? Not with a language or a framework, but a tool. A new browser. One that is actually INTENDED to serve applications and be more than a terminal (and no, Flex, OpenLaszlo, etc. won't suffice - those are shoehorn patches). At least that would be a great start and the best first step. Then follow that up with some other improvements to ease data access, configuration, etc...
*** Rant Wind Down ***
Above all, I'm beginning to tire of the declarations, musings, and "The Thinker" posings of the Fowlers, Eckels, and Cockburns of the world. Look closely (or not so closely) and it's apparent they have their own agendas. They are paid extremely well to pontificate and then talk about their pontifications on the circuit. They are paid just as well the next year on said circuit when they refute what they had spewed just 12 months before...
Not that these guys don't have valuable insight and a vast amount of experience, but they make mistakes - consistently. That's why every other year they are touting the next greatest technology... A good example was the panel at last year's NFJS downright berating Struts - a framework they were in love with only a year or two before.
So my advice is take what they preach, decide on your own if it makes sense, and then either disregard it or tuck it away for use later. Some of the ideas bandied about simply aren't practical (pair-programming being one of them) while others are just goofy (writing a test for a class that doesn't exist just to watch the test (amazingly) fail).
*** End Rant ***
I apologize for this rant, but I'm mired in writing documentation, I have reached my boiling point, and I have had my fill of Magic Cure-All Languages (MCAL for short)...
Also, feel free to take my advice above and disregard any and / or all of my musings :)
Maven 2 Remote Repositories - Part II
It appears that Archiva doesn't work right out of the box - at least not in its current version. After downloading and building the project, it was still throwing configuration exceptions and wouldn't deploy. So I searched around JIRA and found a fix for the bug. After following the prescribed steps and creating my own basic archiva.xml in my .m2 directory, it worked - at least the test did...
When I continued on to deploying the standalone version to my destination server, there was another issue - a NamingException. It turns out someone had checked in a plexus.xml config that duplicated a datasource. I just had to go to the conf/plexus.xml file and fix it... I crossed my fingers, closed my eyes, and ran the run.sh script...
It worked!
Now for configuration...
Follow the directions to set up your managed repositories and the repositories that they proxy. That part is pretty straightforward and works out of the box. The tricky part is setting up your settings.xml.
It appears that, at this time, just setting up mirrors doesn't work on its own. Mirroring works for any non-plugin repositories, but for each plugin repository you will need to set up pluginRepository elements in a profile. This is clunky and will hopefully get worked out as the product matures.
The last tidbit that took me a while to figure out is this: any connection to the managed Archiva repository is expected to be secure - meaning it wants a userid and password. This was not abundantly clear in the documentation... You need to set up a server entry in your settings.xml for each mirror / pluginRepository that you plan on proxying. The userid and password are those defined in Archiva. I simply defined a maven-user user with no password and assigned it the role of Repository Observer.
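For reference, here is a rough sketch of what the relevant pieces of a settings.xml can end up looking like once everything is wired up. The host name, port, and repository id below are placeholders for your own Archiva instance (the exact URL depends on how you deployed it), so treat this as a starting point rather than a drop-in file:

<settings>
  <servers>
    <!-- Archiva wants a login even for read-only access; "maven-user" is the
         Repository Observer account defined in Archiva (no password set). -->
    <server>
      <id>archiva.internal</id>
      <username>maven-user</username>
    </server>
  </servers>
  <mirrors>
    <!-- Route all non-plugin repository traffic through the managed repository. -->
    <mirror>
      <id>archiva.internal</id>
      <mirrorOf>*</mirrorOf>
      <url>http://archiva.example.com:8080/archiva/repository/internal</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>archiva</id>
      <!-- Plugin repositories aren't covered by the mirror on their own,
           so declare them explicitly here. -->
      <pluginRepositories>
        <pluginRepository>
          <id>archiva.internal</id>
          <url>http://archiva.example.com:8080/archiva/repository/internal</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>archiva</activeProfile>
  </activeProfiles>
</settings>

Note that the profile has to be activated (here via activeProfiles), or the pluginRepository entries will never be consulted.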
Once you have these set up you are good to go!
Maven 2 Remote Repositories
In Maven 1.x the repositories were simple - there wasn't a difference between a local repository and a remote repository. The layouts were the same and there wasn't additional information in one that wasn't contained in the other. The only variant was where the repository was located.
In Maven 2.x that all changed. With the addition of transitive dependencies everything got a little more complicated. I will attempt to explain...
A remote repository - and a local one, for that matter - contains a few more files. The obligatory jars are still there, as are the deployed poms. The additional files come in the form of metadata files and their checksums.
Each artifact has, at its root level (i.e. not per version), a maven-metadata.xml file (on the server) or one or more maven-metadata-${serverId}.xml files (in the local repository). These contain all of the released versions of the artifact, the latest and release versions, and a timestamp: the deployment timestamp on the remote, or the download timestamp on the local.
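To make that concrete, a stripped-down metadata file for a made-up artifact looks roughly like this (the coordinates, versions, and timestamp are purely illustrative):

<metadata>
  <groupId>com.example</groupId>
  <artifactId>example-lib</artifactId>
  <versioning>
    <latest>1.2</latest>
    <release>1.2</release>
    <versions>
      <version>1.0</version>
      <version>1.1</version>
      <version>1.2</version>
    </versions>
    <!-- Timestamp of the last deployment (remote) or download (local). -->
    <lastUpdated>20080214120000</lastUpdated>
  </versioning>
</metadata>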
These files are used for a couple of things. The first is to allow Maven to check for updates based on time: if you have repositories in your settings.xml or POM that allow updates (daily, for example), Maven will check these timestamps and compare local versus remote to determine whether a download is required. The second comes into play when a dependency is declared without a version: Maven will first check the local repository and its metadata to determine what the latest version of the artifact is, and will download it if necessary.
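As a quick illustration of both cases - using a plugin for the version-less example, since that is where leaving the version off works most reliably - the snippets below show a repository that allows daily update checks and a plugin declared without a version, which forces Maven to consult the metadata for the latest release. The ids and URL are made up:

<!-- In a POM or a settings.xml profile: allow a daily check against the remote metadata. -->
<repositories>
  <repository>
    <id>internal</id>
    <url>http://repo.example.com/maven2</url>
    <releases>
      <enabled>true</enabled>
      <updatePolicy>daily</updatePolicy>
    </releases>
  </repository>
</repositories>

<!-- In the POM's build section: no version element, so Maven reads the plugin's
     maven-metadata.xml to pick the latest release. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
    </plugin>
  </plugins>
</build>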
This poses a small problem when trying to create an enterprise remote repository that doesn't allow access to the internet at large: these metadata files need to be maintained by hand (or by an automated process) outside of the realm of Maven's dependency management.
Why can't you just copy a local repository to the remote? You can, but it won't work for these dynamic version checks. The problem is that in the local repository the metadata files are renamed to include the id of the server from which a particular version was downloaded. There can be several, depending on the artifact, so you can't just rename a file back to what Maven is expecting to find.
I'm checking into a couple of options. The first I've implemented as a stopgap: a basic wget script that downloads an artifact's complete directory structure. It works, but it's clunky and doesn't automatically handle transitive dependency downloads. The second tool I'm going to test-drive is Archiva.
Check back to see the results...
Technorati Tags: maven, build management