"…but it would perhaps be one of the greatest missed opportunities of our era if free software freed nothing other than code…"

Archive for the ‘Architecture logicielle’ Category

See http://fr.wikipedia.org/wiki/Architecture_Logicielle
– http://fr.wikipedia.org/wiki/Cat%C3%A9gorie:D%C3%A9veloppement_logiciel

[Grenoble] Special evening with Bruce Eckel: « Scala as a first programming language », Wednesday, October 12, 2011

Posted by Noam on October 7, 2011

Source: http://www.jugevents.org/jugevents/event/41752

Does Bruce Eckel really need an introduction?
You have no doubt already seen his name on the bestsellers Thinking in Java and Thinking in C++.

We are taking advantage of his presence at the ICALEPCS international conference at the ESRF to invite him to the Java User Group.

Bruce Eckel : Scala as a first programming language

Other links

Write-ups:

Posted in 2011, Années, Architecture logicielle, C_sharp, design pattern, DotNet, Grenoble, java, Langages

Twisted: the event-driven network engine written in Python + scapy, the very powerful Python packet-analysis tool + Kamaelia, the Python framework developed by the BBC

Posted by Noam on January 21, 2008

Linux-Fr has an article about NuLog's move from PHP to Python (using Twisted), which makes this a good opportunity to talk about Twisted.

http://linuxfr.org/2008/01/18/23582.html (« Security: Nulog 2 is available. Posted by _gryzor_. Moderated on Friday, January 18.

Here is the second generation of the indispensable firewall log-file analysis tool. Nulog2, almost 6 years after v1, still relies on an SQL logging format, but presents the information in a more concise and far more usable way.
New in this release:

  • Complete rewrite of the code, moving from PHP to Python with Twisted Matrix;
  • On-the-fly generation of charts and pie charts at the user's request;
  • Fully customizable home page for each user;
  • Reworked ergonomics and handling of the display criteria for network connections;
  • Much more advanced search capabilities;
  • Export of the displayed data to CSV for custom processing;
  • Move to the GPLv3 license.

Of course, like v1, Nulog2 can work with logs authenticated by a NuFW firewall. It is therefore very simple to create your own indicators for your home page, for example « Histogram of the packets dropped over the last three hours for IP 10.56.140.47 or for user Martin ».

See:

http://en.wikipedia.org/wiki/Twisted_%28software%29 (« Twisted is an event-driven network programming framework written in Python and licensed under the MIT License. Twisted projects variously support TCP, UDP, SSL/TLS, IP Multicast, Unix domain sockets, a large number of protocols (including HTTP, NNTP, IMAP, SSH, IRC, FTP, and others), and much more. Twisted is based on the event-driven programming paradigm, which means that users of Twisted write short callbacks which are called by the framework...Deferreds Central to the Twisted application model is the concept of a deferred (elsewhere called a future). A deferred is a value which has not been computed yet, for example because it needs data from a remote peer. Deferreds can be passed around, just like regular objects, but cannot be asked for their value. Each deferred supports a callback chain. When the deferred gets the value, it is transferred through the callback chain, with the result of each callback being the input for the next one. This allows operating on the values of a deferred without knowing what they are…« )
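
The deferred idea quoted above can be sketched in a few lines of plain Python. This is a toy illustration of the concept only, not Twisted's actual implementation (real code would use twisted.internet.defer.Deferred); the class and method names merely mirror Twisted's:

```python
# Toy sketch of Twisted's "deferred": a value that has not been
# computed yet, plus a chain of callbacks fired once it arrives.
class ToyDeferred:
    def __init__(self):
        self.callbacks = []        # functions to run, in order
        self.result = None
        self.fired = False

    def addCallback(self, fn):
        if self.fired:             # value already here: run immediately
            self.result = fn(self.result)
        else:
            self.callbacks.append(fn)
        return self

    def callback(self, value):
        # The value has arrived (e.g. data from a remote peer): push it
        # through the chain, each callback's result feeding the next.
        self.result = value
        self.fired = True
        for fn in self.callbacks:
            self.result = fn(self.result)

d = ToyDeferred()
d.addCallback(str.upper).addCallback(lambda s: s + "!")
d.callback("data from peer")       # later, when the value is known
print(d.result)                    # DATA FROM PEER!
```

Note how the callers operate on the eventual value without ever asking the deferred for it directly, which is exactly the property the quote describes.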

Twisted

http://twistedmatrix.com/trac/wiki/Downloads (« Source Checkout. To check out code from the repository, use:

svn co svn://svn.twistedmatrix.com/svn/Twisted/trunk Twisted »)

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/84317 (« Easy threading with Futures. Although Python’s thread syntax is nicer than in many languages, it can still be a pain if all one wants to do is run a time-consuming function in a separate thread, while allowing the main thread to continue uninterrupted. A Future provides a legible and intuitive way to achieve such an end. »)
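
For reference, the pattern described in that recipe later landed in the standard library: since Python 3.2, concurrent.futures provides exactly this "run a slow function in a thread while the main thread continues" idiom. A minimal sketch:

```python
# Run a time-consuming function in a separate thread while the main
# thread continues, then collect the result through a Future.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(x):
    time.sleep(0.1)        # stand-in for a long computation
    return x * x

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow_square, 7)   # returns immediately
    # ...the main thread is free to do other work here...
    print(fut.result())                 # blocks only when needed; prints 49
```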

http://linuxfr.org/2008/01/18/23582.html (« Explanation of the move from PHP to Python by Eric Leblond. The question is better phrased in the subject line than in the body of the message. There are in fact two separate things: 1) moving from a bare language to a framework and 2) moving from PHP to Python.
Choice 1 is simple: I started working on nulog 1 (known as ulog-php at the time) around 2001/2002. The notion of a framework was not yet well established (indeed it barely existed). By early 2007 nulog was becoming hard to evolve, so we decided to launch a rewrite project within INL (of which I am a member). The Nulog 2 project was thus initiated with, from the outset, the decision to use a framework and an MVC architecture. Choice 2) comes down to several points: the great qualities of the Twisted framework, notably its ability to offer views over a variety of protocols (SOAP, XML-RPC, IRC, HTTP); the presence at INL of good Python developers, able to support Romain Bignon, the main developer of Nulog 2; and the PHP language being too lax and, above all, tied to the web, whereas we did not want to limit ourselves to that medium. All of these reasons led us to abandon PHP in favour of Python/Twisted. »)

– issue 100 of Linux Magazine, p. 72: « Use Twisted, an event-driven network engine written in Python under the MIT license », written by Sylvain de Tilly.

http://nufw.org/ (« NuFW adds the notion of users to filtering rules. The project relies on Netfilter, the firewall layer of the Linux kernel, and is available under the GPLv3 license. »)

http://software.inl.fr/trac/wiki/EdenWall/NuLog2 (« Nulog2 is a complete rewrite of Nulog, the historical filtering log analysis solution from INL. Nulog2 is an application built upon Twisted, an advanced Python framework…You can also directly checkout subversion source:

svn co http://software.inl.fr/svn/mirror/edenwall/nulog2/trunk/ nulog2

Nulog2’s sources are available for browsing… »)

http://inl.fr/ (« INL offers Free Software solutions to companies and public bodies looking for reliable, secure services and products. We provide high-level integrated products around well-known solutions such as Spamassassin, Postfix, Apache, etc., to which we contribute. We also carry out innovative development at the cutting edge of current technology, for the benefit of the Free Software community. »)

http://wiki.python.org/moin/WebServers (« TwistedMatrix includes a very scalable web server written in Python. »)

http://wiki.python.org/moin/TwistedMatrix (« *NOT* just a framework for WebProgramming. Includes a scalable and safe web server that outperforms apache in terms of security and scalability »)

A related security and networking topic: scapy

http://linuxfr.org/2007/10/27/23264.html (« Scapy is an open-source Python utility developed by Philippe Biondi; it lets you dissect, forge, receive and send data packets (and frames) for a large number of protocols, which you can also decode: DNS, DHCP, ARP, SNMP, ICMP, IP, TCP, UDP. One of the only known weak points of Scapy to date is its lack of documentation, official or otherwise, particularly in French, that would make it accessible to a wider audience than domain experts; starting from that observation, the site Secuobs.com, which specializes in computer security, has made freely available a document that fills part of that gap. You will learn, among other things, how to install and configure Scapy, the basic rules of its use (basic and advanced commands) and of the manipulation of data packets (and frames), including an example of generating 2D/3D graphics from the results of a traceroute carried out with Scapy. »)

http://trac.secdev.org/scapydoc-com/wiki/FRscapydoc (« According to the official documentation (man scapy), Scapy is a powerful interactive packet-manipulation program. It can forge or decode packets of a large number of protocols, send them, capture them, match requests and replies, and much more. It can handle most tasks such as scanning, tracerouting, probing, unit tests, attacks or network discovery (it easily replaces hping, 85% of nmap, arpspoof, arp-sk, arping, tcpdump, tethereal, p0f, etc.). It also performs very well on many tasks that most other programs cannot handle, such as sending invalid frames, injecting your own 802.11 frames, or combining techniques (VLAN hopping + ARP cache poisoning, VoIP decoding over a WEP-encrypted channel…), etc. Scapy is written in Python, because Python is good. Nobody needs to compare Python to Ruby. Yet everyone feels obliged to compare Ruby to Python. Draw your own conclusions. »)
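
Scapy's signature idiom is stacking protocol layers with the / operator (IP()/TCP()/payload). The trick is ordinary Python operator overloading; here is a toy sketch of the idea, with made-up classes rather than scapy's real ones:

```python
# Toy sketch of scapy-style layer stacking via the "/" operator.
class Layer:
    def __init__(self, name, **fields):
        self.name = name
        self.fields = fields
        self.payload = None            # next layer up the stack

    def __truediv__(self, other):      # enables IP() / TCP() style stacking
        top = self
        while top.payload is not None: # append at the innermost layer
            top = top.payload
        top.payload = other
        return self

    def summary(self):
        parts, layer = [], self
        while layer is not None:
            parts.append(layer.name)
            layer = layer.payload
        return " / ".join(parts)

pkt = Layer("IP", dst="10.0.0.1") / Layer("TCP", dport=80) / Layer("Raw")
print(pkt.summary())    # IP / TCP / Raw
```

In real scapy the same expression additionally computes checksums, lengths and default field values for you before the packet is sent.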

http://trac.secdev.org/scapydoc-com/wiki/FrInstall (« Scapy is a piece of software written in Python, which is close to the family of scripting languages. To use Scapy, all you need is a Python command interpreter.

Start by downloading the script using one of the following methods:

  • The latest version from the repository (recommended, updated regularly)
  • The tarball, version 1.1.1 (rarely updated)

Then simply copy the script into your current Python library directory:

wget http://hg.secdev.org/scapy/raw-file/tip/scapy.py
cp scapy.py /var/lib/python-support/python2.4/

It can be useful to create a launcher directly in the user's binary directory:

$ cat /usr/bin/scapy
#!/bin/sh
exec /usr/bin/python /var/lib/python-support/`pyversions -d`/scapy.py

If you are using a GNU/Linux distribution:

aptitude install scapy
urpmi scapy

There is also a port for Microsoft's proprietary OS:

You now have the tools you need; let's move on to practice. »)

– for information, Philippe Biondi is preparing a special Python issue of Linux Magazine (source http://lists.afpy.org/mailman/listinfo/afpy-membres)

======================

Kamaelia

–  http://en.wikipedia.org/wiki/Kamaelia  (« Kamaelia is a free software/open source Python-based systems-development tool and concurrency framework produced by BBC Research. Kamaelia applications are produced by linking independent components together. These components communicate entirely through « inboxes » and « outboxes » (queues) largely removing the burdens of thread-safety and IPC from the developer. This also makes components reusable in different systems, allows easy unit testing and results in parallelism (between components) by default. Components are generally implemented as generators – a method more light-weight than allocating a thread to each (though this is also supported). As a result, switching between the execution of components in Kamaelia systems is very fast. Applications that have been produced using Kamaelia include a Freeview digital video recorder, a network-shared whiteboard, a 3D GUI, an HTTP Server, an audio mixer, a stream multicasting system and a simple BitTorrent client. »)

http://kamaelia.sourceforge.net/Introduction (« A key aim of Kamaelia is to enable even novice programmers to create scalable and safe concurrent systems, quickly and easily. Lego/K’nex for programmers. For people. For building things. It’s about making concurrency on systems easier to use, so easy you forget that you’re using it. It’s been done once before, spectacularly well, so well many people forget it’s there, a key example – unix pipelines. However it’s been done in hardware since day 1, since that’s how hardware works. One day, I sat back and realised that network systems looked almost identical in nature to the asynchronous hardware systems, conceptually, with one major exception. In hardware, you don’t know who your buffers are connected to via wires. You have a protocol for getting that information over (be it a clock, or handshake circuits) but no other knowledge. Kamaelia was borne, technology wise, from the idea « what if we developed software like hardware » – each component with no direct knowledge of any other. Similar to programs in a unix pipeline. This is proving to be a very useful approach. Kamaelia is the result. Kamaelia is divided into 2 sections:

  • Axon – this is a framework, developed by application spikes, for wrapping active objects. Specifically these are generators (mainly) and threads. The resulting library at its core is pretty simple – a novice programmer can learn Python in one week and implement their own version in about a week.
  • Kamaelia – this is the toy box – a library of components you can take and bolt together, and customise. This includes components for TCP/multicast clients and servers, backplanes, chassis, Dirac video encoding & decoding, Vorbis decoding, pygame & Tk based user interfaces and Tk, visualisation tools, presentation tools, games tools…

The reason for concurrency here isn't that we're after performance, but that the problems we're facing are naturally concurrent – millions of people watching content. Therefore, the aim is to make dealing with this concurrency simple/easy, or natural/fun. Hence the lego/K'nex analogy. »)
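
The component model described above – generators wired together through inbox/outbox queues, with a scheduler interleaving them – can be sketched in plain Python. This is a toy illustration of the idea, not Kamaelia's actual Axon API:

```python
from collections import deque

# Toy sketch of Kamaelia-style components: each component is a
# generator that reads from an inbox queue and writes to an outbox
# queue; a tiny round-robin scheduler interleaves their execution.
def upper(inbox, outbox):
    while True:
        while inbox:
            outbox.append(inbox.popleft().upper())
        yield                     # hand control back to the scheduler

def collect(inbox, sink):
    while True:
        while inbox:
            sink.append(inbox.popleft())
        yield

wire = deque()                    # outbox of `upper` wired to inbox of `collect`
source, result = deque(["hello", "world"]), []
components = [upper(source, wire), collect(wire, result)]
for _ in range(3):                # run a few scheduler rounds
    for c in components:
        next(c)
print(result)                     # ['HELLO', 'WORLD']
```

Neither component knows what it is wired to – exactly the hardware-like property the quote describes – and switching between them is just resuming a generator, hence the low cost compared with one thread per component.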

http://kamaelia.sourceforge.net/Repository (« Checking out a working copy

From the command line:

You can check out the whole repository, but be warned, we use a lot of branches – so your initial checkout may be rather large! You can also view the repository contents from a web browser here. There are more details about the subversion service on sourceforge.net. Keeping abreast of check-ins:
Keep tabs on the check-ins we make by subscribing to the kamaelia-commits mailing list. »)

http://yeoldeclue.com/cgi-bin/blog/blog.cgi (« a very original blog »)

http://darkness.codefu.org/wordpress/ (« Kamaelia looks interesting. The system of “wiring” components together feels right to me; I was first exposed to this in NesC. However, the implementation needs to be updated to support the new features of generators in Python 2.5, as the current syntax strikes me as rather ugly. In fact, it looks like Kamaelia needs a recent release, period: the last one I saw was from 2006… (As a side note: everything should be easy_installable. Kamaelia and Twisted are not, though Twisted has ongoing work to this end.) »)

http://yeoldeclue.com/cgi-bin/blog/blog.cgi?rm=viewpost&nodeid=1200187974 (« Interesting wishlist for kamaelia (and others, but I’m interested from a kamaelia perspective) here: http://darkness.codefu.org/wordpress/2007/12/26/295
I’m not sure I buy all the criticisms, and feel they’re more a wish-list in terms of improvements; the specific points mentioned about kamaelia are (IMO) potential wishlist items. »)

Posted in 2008, Internet, MVC, network applications, python, Sécurité informatique

What's new in ASP.NET Extensions: ASP.NET MVC and ASP.NET AJAX

Posted by Noam on December 15, 2007

As this post http://dosimple.ch/articles/MVC-ASP.NET/ written on May 2, 2006, and this one written on March 13, 2007, point out, the ASP.NET framework does not particularly encourage a strict Model-View-Controller separation. We have seen that Monorail, an open-source project, implements the MVC pattern. Microsoft is following suit, quite belatedly, by introducing ASP.NET MVC; see the following post: http://weblogs.asp.net/scottgu/archive/2007/10/14/asp-net-mvc-framework.aspx (« One of the things that many people have asked for over the years with ASP.NET is built-in support for developing web applications using a model-view-controller (MVC) based architecture. Last weekend at the Alt.NET conference in Austin I gave the first public demonstration of a new ASP.NET MVC framework that my team has been working on. You can watch a video of my presentation about it on Scott Hanselman’s blog here.

What is a Model View Controller (MVC) Framework?

MVC is a framework methodology that divides an application’s implementation into three component roles: models, views, and controllers.

  • « Models » in a MVC based application are the components of the application that are responsible for maintaining state. Often this state is persisted inside a database (for example: we might have a Product class that is used to represent order data from the Products table inside SQL).

  • « Views » in a MVC based application are the components responsible for displaying the application’s user interface. Typically this UI is created off of the model data (for example: we might create an Product « Edit » view that surfaces textboxes, dropdowns and checkboxes based on the current state of a Product object).

  • « Controllers » in a MVC based application are the components responsible for handling end user interaction, manipulating the model, and ultimately choosing a view to render to display UI. In a MVC application the view is only about displaying information – it is the controller that handles and responds to user input and interaction.

One of the benefits of using a MVC methodology is that it helps enforce a clean separation of concerns between the models, views and controllers within an application. Maintaining a clean separation of concerns makes the testing of applications much easier, since the contract between different application components are more clearly defined and articulated.

The MVC pattern can also help enable red/green test driven development (TDD) – where you implement automated unit tests, which define and verify the requirements of new code, first before you actually write the code itself.
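
The three roles can be illustrated outside ASP.NET as well; here is a minimal, language-neutral sketch in Python (all names are hypothetical), showing why the separation makes each piece testable on its own:

```python
# Minimal MVC sketch: the model holds state, the view only renders,
# the controller handles input, manipulates the model and picks a view.
class ProductModel:                       # "Model": maintains state
    def __init__(self):
        self._products = {4: "Chai"}      # stand-in for a database table
    def get(self, pid):
        return self._products[pid]

def edit_view(name):                      # "View": display only
    return f"<form><input value='{name}'></form>"

class ProductsController:                 # "Controller": input -> model -> view
    def __init__(self, model):
        self.model = model
    def edit(self, pid):
        return edit_view(self.model.get(pid))

html = ProductsController(ProductModel()).edit(4)
print(html)   # <form><input value='Chai'></form>
```

Because the controller takes the model as a constructor argument, a unit test can hand it a fake model and assert on the returned markup without any web server running – the testability benefit the post emphasizes.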

A few quick details about the ASP.NET MVC Framework

I’ll be doing some in-depth tutorial posts about the new ASP.NET MVC framework in a few weeks once the bits are available for download (in the meantime the best way to learn more is to watch the video of my Alt.net presentation).

A few quick details to share in the meantime about the ASP.NET MVC framework:

  • It enables clean separation of concerns, testability, and TDD by default. All core contracts within the MVC framework are interface based and easily mockable (it includes interface based IHttpRequest/IHttpResponse intrinsics). You can unit test the application without having to run the Controllers within an ASP.NET process (making unit testing fast). You can use any unit testing framework you want to-do this testing (including NUnit, MBUnit, MS Test, etc).

  • It is highly extensible and pluggable. Everything in the MVC framework is designed so that it can be easily replaced/customized (for example: you can optionally plug-in your own view engine, routing policy, parameter serialization, etc). It also supports using existing dependency injection and IOC container models (Windsor, Spring.Net, NHibernate, etc).

  • It includes a very powerful URL mapping component that enables you to build applications with clean URLs. URLs do not need to have extensions within them, and are designed to easily support SEO and REST-friendly naming patterns. For example, I could easily map the /products/edit/4 URL to the « Edit » action of the ProductsController class in my project above, or map the /Blogs/scottgu/10-10-2007/SomeTopic/ URL to a « DisplayPost » action of a BlogEngineController class.

  • The MVC framework supports using the existing ASP.NET .ASPX, .ASCX, and .Master markup files as « view templates » (meaning you can easily use existing ASP.NET features like nested master pages, <%= %> snippets, declarative server controls, templates, data-binding, localization, etc). It does not, however, use the existing post-back model for interactions back to the server. Instead, you’ll route all end-user interactions to a Controller class instead – which helps ensure clean separation of concerns and testability (it also means no viewstate or page lifecycle with MVC based views).

  • The ASP.NET MVC framework fully supports existing ASP.NET features like forms/windows authentication, URL authorization, membership/roles, output and data caching, session/profile state management, health monitoring, configuration system, the provider architecture, etc. »)

See also:

  • https://pvergain.wordpress.com/2007/03/13/critique-de-larchitecture-aspnet/ (« The problem with ASP.NET is that a single object handles HTTP requests: the PAGE object. It controls everything, and so it mixes the so-called « controller » code with the code that drives the « view », i.e. the rendering of the elements into HTML. And quite often the code that drives the « Model » gets mixed in as well, i.e. fetching the data directly from the database with ADO.NET (which is what you get when you work WYSIWYG in Visual Studio, picking SQL Data Sources and dragging and dropping them onto the UI). This anti-pattern has often been singled out by architects and developers because, besides producing spaghetti code (albeit object-oriented), it makes systematic (automated) testing impossible. »)
  • http://dosimple.ch/articles/MVC-ASP.NET/ (« In the ASP.NET framework the view is an HTML file sprinkled with ASP tags. The controller and the model are generally mixed together in an object deriving from the System.Web.UI.Page class. »)
  • http://www.castleproject.org/monorail/index.html (« MonoRail is a MVC Web Framework inspired by Action Pack. MonoRail differs from the standard WebForms way of development as it enforces separation of concerns; controllers just handle application flow, models represent the data, and the view is just concerned about presentation logic. Consequently, you write less code and end up with a more maintainable application. Although the project name is MonoRail, we do not have any affiliation with the Mono project. MonoRail runs on Microsoft .Net 1.1, 2.0 and Mono.« )
  • http://en.wikipedia.org/wiki/ASP.NET_MVC_Framework (« ASP.NET MVC Framework is a Model-view-controller framework which Microsoft is adding to ASP.NET. It allows an application to be built as a composition of three roles: Model, View and Controller. A Model represents the state of a particular aspect of the application. Frequently, a model maps to a database table, with the entries in the table representing the state of the table. A Controller handles interactions and updates the model to reflect a change in state of the application. A View extracts necessary information from a model and renders a UI to display it. The ASP.NET MVC Framework couples the models, views and controllers using interface-based contracts, thereby allowing each component to be easily tested independently. The view engine in the MVC framework uses regular .aspx pages to design the layout of the UI pages onto which the data is composed; however, any interactions are routed to the controllers rather than using the postback mechanism. Views can be mapped to REST-friendly URLs. The ASP.NET MVC Framework was launched as a Community Technology Preview on December 10, 2007. »)
  • http://www.hanselman.com/blog/ScottGuMVCPresentationAndScottHaScreencastFromALTNETConference.aspx ( » I attended the ALT.NET Conference last weekend in Austin, TX. I personally find the name « Alt » as in « Alternative » too polarizing and prefer terms like « Pragmatic.NET » or « Agile.NET. » At the conference I suggested, partially in jest, that we call it « NIH.NET » as in « Not Invented Here.NET. » 😉 Ultimately this is a group that believes in:
    • Continuous Learning
    • Being Open to Open Source Solutions
    • Challenging the Status Quo
    • Good Software Practices
    • DRY (Don’t Repeat Yourself)
    • Common Sense when possible.

    ScottGu gave an hour long presentation on the upcoming MVC Framework and I took some guerilla video. ScottGu’s presentation is here in Silverlight and it’s about 60 minutes long. Considering it’s so long, the video squished nicely. This was the first time the MVC Framework was shown publicly. Note that this was a Prototype, not the Production code and both ScottGu and I make that point a number of times to drive home that it’s early. Some of the code was written on a plane, just to give you an idea. After The Gu did his piece on the MVC Framework, I showed some prototype hacking that I’d done over the previous few days along with some work Phil Haack did. My presentation is here as Silverlight and it’s about 30 minutes long. I showed the Model View Controller with Controllers in IronPython and an IronPython view using a WebFormViewEngine. Then I talked about the possibilities of alternate ViewEngines and showed Phil Haack’s prototype RubyViewEngine. »)

  • http://weblogs.asp.net/scottgu/archive/2007/11/13/asp-net-mvc-framework-part-1.aspx ( » ASP.NET MVC Framework (Part 1) I’m going to use a simple e-commerce store application to help illustrate how the ASP.NET MVC Framework works. For today’s post I’ll be implementing a product listing/browsing scenario in it. Specifically, we are going to build a store-front that enables end-users to browse a list of product categories when they visit the /Products/Categories URL on the site>>Since it adds a testing project, does this require Team System? No – the good news is that Test Projects with VS 2008 are now supported with the VS 2008 Professional edition – and no longer require team system. You will also be able to use the VS Express and Standard edition products as well, and then use NUnit, MBUnit or some other testing framework with them (we’ll ship project templates for these as well…>> 2. Is it also possible to have this URL mapped: /Products/Beverages/3 instead of /Products/Beverages?page=3 or does it _need_ the parameter name? Yes – this is totally supported. Just set a route rule for /Products/Beverages with the format /Products/<Category>/<Page> – they’ll then get passed in as arguments to the action method:public List(string category, int? page) {}>>Also, some ideas and general remarks: 1. I’d love to see more helper methods for use in the views, I like how Ruby on Rails has many shortcuts for creating forms and formfields etc. Yes – we’ll have a rich library of helper methods for the views. They’ll include helpers to create all the standard forms, formfields and more.

>>2. Migrations! Not a part of MVC but from my experience with Ruby on Rails I would love to see support for this somewhere, anywhere! It would fit perfectly with the more agile way of developing that is possible with ASP.NET MVC. Rob Conery is building .NET Migrations support as part of the SubSonic project, and recently joined Microsoft. You’ll be able to use this with ASP.NET MVC

>>I’m also very keen to get my hands on the CTP. Scott, you mention, using Inversion of Control containers with the MVC framework. I’d be very interested in seeing a blog post on this subject. Also, will there be a sample application (with tests and IoC) available alongside the CTP? We have an IControllerFactory extensibility point that owns creating Controller instances. You can use this with an IoC container to customize any Controller instance prior to it being called by the MVC framework. We’ll be posting samples of how to use this with ObjectBuilder and Windsor with the first CTP I believe

>> Very cool! One thing I’d like to see guidance on is developing MVC apps with multiple UIs. You say here that it’s best to put your models and controllers in the web app, but say we want a Winforms, WPF, Silverlight, and Web UI all doing similar things. Or a Mobile Web UI and Desktop Web UI… Would these still each need their own Models and Controllers, or does it make sense to have one library that they all share? If so, how is that supported? I’m still new to MVC, so if I’m missing something conceptually here, tell me! That is a good question, and one we’ll need to provide more guidance on in the future. In general it is often hard to share the same controller across both client and web UI, since the way you do data access and the stateful/stateless boundaries end up being very different. It is possible – but some guidance here would certainly be useful. My team should hopefully be coming out with this in the future

>>I really appreciate this material. Do you support the MVC pattern over the MVP pattern? Or are there just better scenarios for using each? The above approach I showed uses a MVC based pattern – where the Controller and the View tend to be more separate and more loosely coupled. In a MVP pattern you typically drive the UI via interfaces. This works well with a controls model, and makes a lot of sense with both WinForms and WebForms where you are using a postback model. Both MVC and MVP are perfectly fine approaches to use. We are coming out with the MVC approach for ASP.NET partly because we’ve seen more demand for it recently for web scenarios…

>> When can we expect a similar chapter with SubSonic as the DAL and scaffolding provider? (see http://oakleafblog.blogspot.com/2007/11subsonic-will-be-toolset-for-microsofts.html I’ll be covering scaffolding in a future blog post. LINQ to SQL scaffolding is built-in with ASP.NET MVC and doesn’t require SubSonic. SubSonic will also obviously be adding new ASP.NET MVC features as well.

>> My applications in .NET works with « 3 layers » pattern (business logic and data access in your own dll). How can i use this wonderfull MVC with my Models (data access) and Controllers (B.Logic)?? Because if i’m not reuse this, i’ve repeat code in every layer; then this MVC is not DRY (don’t repeat yourself) and the community don’t accept. There is no need to put your business and data logic in the same assembly as your controllers. You can split them up across multiple class library projects if you prefer

>> Does this mean that with MVC, we no longer use LinqDatasource in the View section? While the LinqDataSource control will technically work in MVC Views, if you are using a MVC model you wouldn’t want to place it there. Instead you want to perform all of your data and application logic in the Controller layer – and then pass the needed data to the view.

>> Perhaps I missed it somehow, but can you explain on which version of asp.net will this ctp run? The MVC framework builds on top of .NET 3.5

>> Scott, this is amazing timing! The URL mapping features inherit in an MVC application are PERFECT for the upgrade to ASP.NET 3.5 I’m making to my spelldamage.com site. Hosting question, will hosts simply need to support the ASP.NET 3.5 framework to allow us to run ASP.NET MVC applications? Your hoster will need to support .NET 3.5 for the MVC support.

>> Is’nt the MVC framework, in fact the Controller, implementation of the Front Controller pattern? Yes – the ASP.NET MVC framework uses a front-controller pattern. »)

  • http://weblogs.asp.net/scottgu/archive/2007/12/03/asp-net-mvc-framework-part-2-url-routing.aspx (« Last month I blogged the first in a series of posts I’m going to write that cover the new ASP.NET MVC Framework we are working on. The first post in this series built a simple e-commerce product listing/browsing scenario. It covered the high-level concepts behind MVC, and demonstrated how to create a new ASP.NET MVC project from scratch to implement and test this e-commerce product listing functionality. In today’s blog post I’m going to drill deeper into the routing architecture of the ASP.NET MVC Framework, and discuss some of the cool ways you can use it for more advanced scenarios in your application… What does the ASP.NET MVC URL Routing System do? The ASP.NET MVC framework includes a flexible URL routing system that enables you to define URL mapping rules within your applications. The routing system has two main purposes:
    • Map incoming URLs to the application and route them so that the right Controller and Action method executes to process them
    • Construct outgoing URLs that can be used to call back to Controllers/Actions (for example: form posts, <a href= » »> links, and AJAX calls)

Having the ability to use URL mapping rules to handle both incoming and outgoing URL scenarios adds a lot of flexibility to application code. It means that if we want to later change the URL structure of our application (for example: rename /Products to /Catalog), we could do so by modifying one set of mapping rules at the application level – and not require changing any code within our Controllers or View templates.
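The two routing purposes described above boil down to parsing and generating URLs against a rule like « /[controller]/[action]/[id] ». As a rough, framework-neutral sketch (written in Java, since this blog also covers Java; the `UrlRouter` class and its defaults are hypothetical, not the actual ASP.NET API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal front-controller style URL matcher: splits an incoming path
// against a « /[controller]/[action]/[id] » rule and applies defaults,
// the way the routing system described above does. Hypothetical sketch.
class UrlRouter {
    static Map<String, String> match(String url) {
        Map<String, String> route = new HashMap<>();
        // Hypothetical default route values when URL segments are missing.
        route.put("controller", "Home");
        route.put("action", "Index");
        String[] parts = url.replaceAll("^/+|/+$", "").split("/");
        if (parts.length > 0 && !parts[0].isEmpty()) route.put("controller", parts[0]);
        if (parts.length > 1) route.put("action", parts[1]);
        if (parts.length > 2) route.put("id", parts[2]);
        return route;
    }
}
```

Because matching is centralized in one rule, renaming /Products to /Catalog would mean changing only the mapping, not the controllers or views, which is exactly the flexibility the post describes.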

>> BTW, can you explain briefly the Active Record-style support in the MVC framework?

.NET 3.5 has LINQ to SQL built-in – which is a great ORM. LINQ to Entities will also be built-into .NET 3.5 in the future (it also ships with the MVC setup). The MVC framework doesn’t require LINQ to SQL or LINQ to Entities as the data model – it also works with NHibernate, LLBLGen, SubSonic, DataSets, DataReaders, and/or any other data model with .NET. We will, though, provide out of the box scaffolding support for LINQ to SQL and LINQ to Entities that delivers nice integration with the MVC support.

>> Question : What is this MockHttpContext in the UnitTest class? I mean, i can guess what it is .. but is it new in .NET 3.5 or the MVC framework? The MockHttpContext is an example mock object. It isn’t currently built-in with the MVC framework (with the first preview you’d need to create this yourself). I’m hoping that before the MVC framework ships, though, that there are a built-in set of mock objects and test helpers that you can use out of the box (without having to mock them yourself). Note that all core contracts and types with the MVC framework are mockable (non-sealed and interfaces). So you can also use any existing Mock framework to test it (RhinoMocks, TypeMock, etc).

>> 1) Is it possible to use routes with a file extension if I don’t want to enable wildcard script mappings? For example, /Products/List/Beverages.rails or /Products/Detail/34.rails ? Yes – this is fully supported. We recommend changing your route rule to /[controller].mvc/[action]/[id]. When you install the ASP.NET MVC Framework we automatically register this .mvc extension – although you are free to use any other one you want.

>> Hypothetically, do you think it would be possible to customise the route creation process so that route data could be gathered from attributes on actions? We don’t currently support this built-in, but hypothetically you could load all the controllers at app-startup and use reflection on them to calculate the route rules. This would enable the scenario you are after.

>> Is it possible to use IronRuby as the coding language to target the MVC framework?? Yes – we’ll support both IronRuby and IronPython with the ASP.NET MVC Framework.

>> I am liking this more and more. I can see how the routing configuration code in global.asax.cs could become quite large. In my application, I can see dozens, maybe hundreds of unique routing rules. Is there any way this configuration can be put in a file somewhere? That file would be read on Application start. Seems like that would make deployment of routing changes easier, too. We don’t currently have a pre-built file format for reading in mapping rules. But it should be relatively easy to build (you could even use LINQ to XML to read in and construct the rules).

>> Just out of curiosity where in the HttpApplication cycle are the routing rules evaluated and are they exposed in any way to the rest of the HttpApplication cycle? My current security system grants permissions at what would become the controller-action level so if the route determination is made early enough I’d really like to drive my authorization off of it.
The routing rules are resolved after the OnPostMapRequestRequestHandler event. This means that it will happen *after* your authorization rules evaluate. The good news is that this means you should be able to re-use your existing security system as-is.

>> Will there be any way to use an XML document to create the routing rules outside of the Global.asax code? Yep – this scenario will be supported.

>> I noticed that in Django, you have to repeat yourself kind of often when you have a deep nested hierarchy of pages. Your search & search-results pages seem to be continuing that trend. I’m sure I could come up with some hierarchical data structure which can be serialized into Route objects, but is the ASP.NET team planning anything along those lines that would come stock?

With our first preview the Route type is not extensible (you can’t subclass it) – which means you sometimes need to register multiple routes for a series of things. For the next preview we are looking at enabling Route to be sub-classed – in which case you could easily encapsulate multiple URL mappings into a single registration.

>> Have you thought about being able to define a regular expression for url matching and using backreferences or named captures as the tokenized url? I think this would allow for much more flexibility while keeping the list of routing rules down to a minimum.

Yes – this is something we are looking at. The challenge with regular expressions is that only a subset of people really understand them (and importantly – understand how to optimize them). But I agree that having this as an option would be super flexible.

>> I must say that I am still worried about having to leave all the knowledge we have built up until now with WebForms and start a new technology, and I still think we will need some kind of bridge to close the gap between today's WebForms solution and tomorrow's MVC solution. Although the way you structure application flow will be different, I think you'll be pleasantly surprised by the amount of knowledge overlap that exists. Authentication, Authorization, Caching, Configuration, Compilation, Session State, Profile Management, Health Monitoring, Administration, Deployment, and many, many other things are exactly the same. MVC views are also .aspx pages (that use .ascx user controls and .master files). So the concept re-use is quite heavy there as well. »)

  • http://weblogs.asp.net/scottgu/archive/2007/12/06/asp-net-mvc-framework-part-3-passing-viewdata-from-controllers-to-views.aspx (« The last few weeks I have been working on a series of blog posts that cover the new ASP.NET MVC Framework we are working on. The ASP.NET MVC Framework is an optional approach you can use to structure your ASP.NET web applications to have a clear separation of concerns, and make it easier to unit test your code and support a TDD workflow. The first post in this series built a simple e-commerce product listing/browsing site. It covered the high-level concepts behind MVC, and demonstrated how to create a new ASP.NET MVC project from scratch to implement and test this e-commerce product listing functionality. The second post in this series drilled deep into the URL routing architecture of the ASP.NET MVC framework, and discussed both how it worked as well as how you can handle more advanced URL routing scenarios with it. In today’s blog post I’m going to discuss how Controllers interact with Views, and specifically cover ways you can pass data from a Controller to a View in order to render a response back to a client. In Part 1 of this series, we created an e-commerce site that implemented basic product listing/browsing support. We implemented this site using the ASP.NET MVC Framework, which led us to naturally structure the code into distinct controller, model and view components. When a browser sends a HTTP request to our web site, the ASP.NET MVC Framework will use its URL routing engine to map the incoming request to an action method on a controller class to process it. Controllers in a MVC based application are responsible for processing incoming requests, handling user input and interactions, and executing application logic based on them (retrieving and updating model data stored in a database, etc). 
When it comes time to render an HTML response back to the client, controllers typically work with « view » components – which are implemented as separate classes/templates from the controllers, and are intended to be focused entirely on encapsulating presentation logic. Views should not contain any application logic or database retrieval code; instead, all application/data logic should only be handled by the controller class. The motivation behind this partitioning is to help enforce a clear separation of your application/data logic from your UI generation code. This makes it easier to unit test your application/data logic in isolation from your UI rendering logic. Views should only render their output using the view-specific data passed to them by the Controller class. In the ASP.NET MVC Framework we call this view-specific data « ViewData ». The rest of this blog post is going to cover some of the different approaches you can use to pass this « ViewData » from the Controller to the View to render.

>> One question Scott: let's say I'm jumping on the MS MVC bandwagon, do I have to abandon the ASP.NET page lifecycle goodies and set my mind to a different approach? When you use the ASP.NET MVC approach you'll want to have all posts in your site go to your Controller. This helps ensure that the application can be easily tested and preserves the separation of concerns. This is different from using ASP.NET server controls which postback to themselves (which is very powerful too – but means that it can sometimes be slightly harder to test). What doesn't change is that all the other ASP.NET pieces (forms authentication, role management, session state, caching, profiles, configuration, compilation, httprequest/response, health monitoring, etc, etc) work with both web forms based pages and MVC based pages. MVC UIs are also built with .aspx, .master, and .ascx files – so there is a high level of skill re-use there as well.

>> Some ASP.NET controls require <form runat=server>. If we use an asp:dropdownlist for example, we have to place it in the asp:form. And this means viewstate comes back! Is there any way to get rid of hidden form fields? Or do you suggest that we must use classic HTML controls? I'll cover this more in my next blog in this series. We have helpers that can automate generating things like dropdownlists, etc. Think of them as controls for MVC. These don't use viewstate and give you a little more control over the output of your HTML.

>> It would be nice if you could briefly compare MVC with RoR. With RoR, we can create tables, columns and rows with Ruby, without using a single line of SQL, all done through ActiveRecord. In short, all the CRUD advantages.

>> SubSonic is almost along these lines. Can you explain more about this, or how SubSonic can be used to get ActiveRecord-style advantages?

RoR is made up of several components.

« Action Controller » is the name of the MVC framework that Rails uses. That is the closest analogy to the ASP.NET MVC Framework – and at a high-level the ASP.NET MVC framework uses many of the same core concepts (URLs map to controller actions). The ASP.NET MVC Framework has a few additional features (like the ability to map incoming parameters directly to action method parameters). It is also more explicit about calling RenderView within the request.

« Action View » is the name of the view engine that Rails uses. This is analogous to the .aspx/.master/.ascx infrastructure ASP.NET has. Our view engine is IMO a little richer, and supports several additional features (declarative controls/components, templating, multiple regions for nested master pages, WYSIWYG designer, strongly typed ViewData access, pluggable storage provider, declarative localization, etc).

« Active Record » is the name of the ORM (object relational mapper) that RoR uses. This is analogous to an ORM in the .NET world – whether it is LINQ to SQL, LINQ to Entities, LLBLGen, SubSonic, NHibernate, ActiveRecord (the .NET version), etc. All of these ORMs have ways to map rows and columns in a database to objects that a developer then manipulates and works with (and can save changes back). The LINQ to SQL ORM is built into .NET 3.5 today, and LINQ to Entities will be built in with .NET 3.5 SP1.

I think from your follow-up question above you are referring specifically to the « migrations » feature in RoR – which allows you to define table schemas using code, and version the schemas forward/backward. There isn’t a built-in version of this in the core .NET Framework today – although the SubSonic project has recently released a version that allows you to-do this using .NET. You can find out more about it here: www.subsonicproject.com.

>> Superb, …but I am still longing for the IoC integration (Spring.NET, Windsor, StructureMap) … maybe in the next post? I am planning on posting about IoC and dependency injection in the future (the ASP.NET MVC framework fully supports it and can be used with Spring.NET, Windsor, StructureMap, etc). I have a few other more basic MVC tutorials I need to get out of the way first – but testing and dependency injection are on the list for the future.

>> Nice series; eagerly waiting to see the AJAX approach in the new ASP.NET MVC Framework, so when shall we expect that?
We will have ASP.NET AJAX support with the ASP.NET MVC Framework. I’ll talk more about this in a future tutorial series post.

>> This is coming together nicely. I was just hoping you might explain why there are 3 different ways to pass the ViewData? Does each method offer something that the others don't? If not, surely it would be best to choose one method and force all developers to follow the same practice? Conceptually there are two ways to pass viewdata – either as a dictionary, or as a strongly typed object. In general I recommend using the strongly typed approach – since this gives you better compilation checking, intellisense, and refactoring support. There are, however, some times when having a more latebound dictionary approach ends up being flexible for more dynamic scenarios. »)
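The dictionary-versus-strongly-typed trade-off contrasted in the answer above can be sketched in plain Java (hypothetical class names; the real framework uses C# and its own ViewData types):

```java
import java.util.HashMap;

// Late-bound approach: a string-keyed dictionary. Flexible for dynamic
// scenarios, but a typo in a key is only caught at runtime.
class ViewDataDictionary extends HashMap<String, Object> { }

// Strongly typed approach: one dedicated view-model class per view.
// The compiler checks names and types, and IDEs can refactor them safely.
class ProductListViewData {
    private final String categoryName;
    private final int productCount;

    ProductListViewData(String categoryName, int productCount) {
        this.categoryName = categoryName;
        this.productCount = productCount;
    }

    String getCategoryName() { return categoryName; }
    int getProductCount() { return productCount; }
}
```

The strongly typed class gives compile-time safety; the dictionary trades that away for flexibility when the shape of the data is not known up front.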

  • http://blog.wekeroad.com/2007/10/26/microsoft-subsonic-and-me/ (« Rather than try and come up with some lame metaphors and trite pop-culture references, I’ve decided to use some advice from English 101 Professor:

    “Whatever the hell you’re trying to say, just say it”

    So I will: I’m going to work for Microsoft. I just signed the offer letter. I’ll be working with the ASP.NET guys on the new MVC platform as well as some other groovy things like Silverlight. I get to work “across the hall” from one of my very good friends – Phil Haack. I think it’s worth pointing out that SubSonic hasn’t been “bought”. Some might smell a conspiracy here, but I’ll leave that to the X-Files and Cap’n Crunch crowd to drum up all the evil reasons why the mothership has “beamed me up”. SubSonic will remain under the same MPL 1.1 license it always has, and will remain as completely Open Source as it always has – nothing will change at all. I’m just getting paid, essentially, to work on it 🙂...This is crucial to me. I decided to be direct with him and make sure we both understood these important points:

    “I want to be sure I have complete creative control over SubSonic, and that you don’t censor my blog… is that cool?”

    Shawn’s response is why I took the job:

    “Well Duh…” (he added some more things that were a bit more eloquent than “duh” – but I don’t think I was listening).

    I can make jokes about the UAC on my blog? And make up fictional Matrix conversations with ScottGu? Sign me up! I start on November 12th, right after DevConnections in Vegas (come on by if you're out that way at the DNN Open Force »)

  • http://www.subsonicproject.com/ (« A Super High-fidelity Batman Utility Belt. SubSonic works up your DAL for you, throws in some much-needed utility functions, and generally speeds along your dev cycle. Why SubSonic ? Because you need to spend more time with your friends, family, dog, bird, cat… whatever. You work too much. Coding doesn’t need to be complicated and time-consuming. »)
  • http://en.wikipedia.org/wiki/Test-driven_development (« Test-Driven Development (TDD) is a software development technique consisting of short iterations where new test cases covering the desired improvement or new functionality are written first, then the production code necessary to pass the tests is implemented, and finally the software is refactored to accommodate changes. The availability of tests before actual development ensures rapid feedback after any change. Practitioners emphasize that test-driven development is a method of designing software, not merely a method of testing. Test-Driven Development began to receive publicity in the early twenty-first century as an aspect of Extreme Programming, but more recently is creating more general interest in its own right. Along with other techniques, the concept can also be applied to the improvement and removal of software defects from legacy code that was not developed in this way. »)
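In miniature, the test-first cycle described above looks like this (a hypothetical discount calculator; a real project would express the tests with a framework such as JUnit rather than bare asserts):

```java
// Red/green/refactor in miniature. Conceptually, the failing test came
// first; this class is the minimal production code that makes it pass.
class Discount {
    // Returns the price after applying a percentage discount.
    static double apply(double price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be between 0 and 100");
        }
        return price * (100 - percent) / 100.0;
    }
}
```

The point is the order of events: the tests existed before the production code, drove its design, and now guard it during refactoring.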
  • http://weblogs.asp.net/scottgu/archive/2007/12/09/asp-net-mvc-framework-part-4-handling-form-edit-and-post-scenarios.aspx (« ASP.NET MVC Framework (Part 4): Handling Form Edit and Post Scenarios. The last few weeks I have been working on a series of blog posts that cover the new ASP.NET MVC Framework we are working on. The ASP.NET MVC Framework is an optional approach you can use to structure your ASP.NET web applications to have a clear separation of concerns, and make it easier to unit test your code and support a TDD workflow. The first post in this series built a simple e-commerce product listing/browsing site. It covered the high-level concepts behind MVC, and demonstrated how to create a new ASP.NET MVC project from scratch to implement and test this e-commerce product listing functionality. The second post drilled deep into the URL routing architecture of the ASP.NET MVC framework, and discussed both how it worked as well as how you can handle more advanced URL routing scenarios with it. The third post discussed how Controllers interact with Views, and specifically covered ways you can pass view data from a Controller to a View in order to render a response back to a client. In today's blog post I'm going to discuss approaches you can use to handle form input and post scenarios using the MVC framework, as well as talk about some of the HTML Helper extension methods you can use with it to make data editing scenarios easier. Click here to download the source code for the completed application we are going to build below to explain these concepts… Our Data Model: We are going to use the SQL Server Northwind Sample Database to store our data. We'll then use the LINQ to SQL object relational mapper (ORM) built into .NET 3.5 to model the Product, Category, and Supplier objects that represent rows in our database tables.
We'll begin by right-clicking on our /Models sub-folder in our ASP.NET MVC project, and selecting « Add New Item » -> « LINQ to SQL Classes » to bring up the LINQ to SQL ORM designer and model our data objects… To learn more about LINQ and LINQ to SQL, please check out my LINQ to SQL series here. The HtmlHelper object (as well as the AjaxHelper object – which we'll talk about in a later tutorial) has been specifically designed to be easily extended using « Extension Methods » – which is a new language feature of VB and C# in the VS 2008 release. What this means is that anyone can create their own custom helper methods for these objects and share them for you to use. We'll have dozens of built-in HTML and AJAX helper methods in future previews of the ASP.NET MVC Framework. In the first preview release only the « ActionLink » method is built into System.Web.Extensions (the assembly where the core ASP.NET MVC framework is currently implemented). We do, though, also have a separate « MVCToolkit » download that you can add to your project to obtain dozens more helper methods that you can use with the first preview release.
  • >> This is nice. But do you handle subcomponents? Subviews? Is view reuse possible inside another view? This is very important even for modest-sized applications. How would you handle posts correctly inside of subviews 😉? The first public MVC preview doesn't have the concept of SubControllers yet (where you can attach them for regions of a larger view). That is something we have planned to tackle soon though.

    >> However, liviu raises good questions about Sub-Controllers, Composite Views and AJAX scenarios. How will the MVC framework address the complex wiring of different parts of a page to different controllers for async updates? My brain hurts thinking about it, yet I implement these portal pages all the time because our customers demand it. The first ASP.NET MVC Preview allows you to call RenderView (for example: RenderView(« Edit »)) and have it apply to either a .aspx viewpage or a .ascx viewusercontrol. This ends up being really useful for scenarios where you are doing AJAX callbacks and want to refresh a portion of a page. These callbacks could either be to the same Controller that rendered the original page, or to another Controller in the project. We don't have all the AJAX helper methods yet for this – but you could roll them yourself today. We'll obviously be releasing an update that does include them in the future.

  • >> Can you provide an example of how we might use a server side control to generate the drop down list or a textbox (a la WebForms) along with the pro’s and con’s of each approach? Personally, I do not like the old ASP <%= … %> syntax; but if there is a good reason to use it then I am willing to change that opinion. The main reason I showed the <% %> syntax in this post was that we don’t have the server-side MVC control equivalents available yet for this dropdown and text scenario.  That is on our roadmap to-do though – at which point you don’t need to necessarily use the <% %> syntax. In general what we’ve found is that some developers hate the <% %> syntax, while others prefer it (and feel that controls get in the way).  We want to make both sets of developers happy, and will have options for both. 🙂
  • >> Great work on the MVC model; it is a clean and intuitive model that appears to scale well. My only recommendation is that your actions (verbs) appear after the ids (nouns), e.g. [Controller]/[Id]/[Action]/. That would keep the architecture noun-focused, as opposed to verb-focused. With this approach you can easily build an administrative interface on a public read-only site by adding verbs such as /edit/ or /delete/ at the end of your URL. A simple security rule can be added to the Controller that says to ignore all verbs after a noun, such as category or product, if the user is not part of an administrative group. Thanks for the suggestion Josh. You can do this today by creating your own custom route registration. In the next preview I believe Route will be extensible so that you could also create a single resource route that supports these semantics.

    >> I was wondering if we could get an idea of the official « roadmap » for ASP.NET MVC? Specifically, I'd like to know what features the team plans to release on what dates … ultimately, when this thing will RTM. As I'll be evaluating MVC and MonoRail for an upcoming app, this kind of information would really be helpful in determining which direction to go. We don't have a formally published roadmap just yet – we will hopefully publish one early next year though. »)
  • http://en.wikipedia.org/wiki/ASP.NET_AJAX (« ASP.NET AJAX, formerly code-named Atlas, is a set of extensions to ASP.NET developed by Microsoft for implementing Ajax functionality. Including both client-side and server-side components, ASP.NET AJAX allows the developer to create web applications in ASP.NET 2.0 (and to a limited extent in other environments) which can update data on the web page without a complete reload of the page. The key technology which enables this functionality is the XMLHttpRequest object, along with Javascript and DHTML. ASP.NET AJAX was released as a standalone extension to ASP.NET in January 2007 after a lengthy period of beta-testing. It was subsequently included with version 3.5 of the .NET Framework, which was released alongside Visual Studio 2008 in November 2007.« )

Posted in Architecture logicielle, javascript, ruby, design pattern, ASP.NET, tests, Web Frameworks, active record, DotNet, IDE-GUI, ORM, Ironpython, Développement logiciel, open source, RIA, AJAX, Acces aux données, 2007 | Tagged: , , , , , , , | Leave a Comment »

A few tools for developing an open source application: an open source application based on Java EE 5 and JBoss Seam: Nuxeo 5

Posted by Noam on November 7, 2007

Java applications are often deployed in fairly heavyweight environments. The situation is improving, however, thanks to open source tools proven in other projects, and to Java EE 5 application servers that implement much-awaited features such as Enterprise JavaBeans 3.0 (EJB3), the Java Persistence API (JPA) and JavaServer Faces (JSF). Since we also want to cover free software in the enterprise, we will look at Nuxeo 5, which uses JBoss Seam as a component of its web framework.

ECM (Enterprise Content Management): content management aims to manage all of an enterprise's content. It deals with information in electronic, unstructured form, such as electronic documents, as opposed to information already structured in databases. For example, it makes it possible to manage all the information in a customer file (paper letters, e-mails, faxes, contracts, etc.) within a single infrastructure.

Nuxeo 5 is a complete, robust and extensible enterprise content management platform, developed under a free software model by the Nuxeo company and a community of contributors, using open source Java EE technologies. The Nuxeo platform covers the full functional and technical spectrum of ECM:

– Document management (DM)

– Collaborative work

– Business process management (document workflow)

– Legal and regulatory compliance management

– Records management

– Multimedia content management

– Knowledge management (KM)

Nuxeo 5 background: « …The finalization of Java Enterprise Edition 5.0 (Java EE 5) was expected in 2007, and Nuxeo wanted to be able to take advantage of much-awaited new features such as Enterprise JavaBeans 3.0 (EJB3), the Java Persistence API (JPA) and JavaServer Faces (JSF). The migration project to Java EE 5 was launched with the goal of delivering the Nuxeo 5 platform in Q4 2006…

As far as open source Java was concerned, a single name stood out: JBoss. « The choice of JBoss Application Server was obvious because it sits at the heart of an open source software package in which we had already tested several modules essential to our project. »

…Nuxeo 5 uses:

– JBoss Cache to provide the ECM platform's distributed cache for frequently accessed data,

– JBoss jBPM to provide business process management and workflows,

– JBoss Rules to enable the creation of business rules,

– and JBoss Seam, an innovative component programming framework, to provide a dynamic and extensible web layer that unifies Java EE 5 features such as EJB3 and JSF with Web 2.0 technologies such as Asynchronous JavaScript and XML (Ajax).

…Finally, as an open source solutions provider, Nuxeo appreciated JBoss's open and collaborative development process, which welcomes participation and contributions from customers, partners and individuals. This is the antithesis of commercial software, which is developed behind closed doors and delivered as a black box. Thanks to its experience as a user of JBoss Enterprise Middleware, Nuxeo is able to contribute to the direction of JBoss development. And ultimately, that means having a say and being heard.

Other sources:

http://www.michaelyuan.com/blog/2006/11/14/seam-without-ejb3/ (« Seam has always supported POJO components in addition to EJB3 components. You can use Seam POJOs to replace EJB3 session beans and Hibernate POJOs to replace EJB3 entity beans« )

– http://www.jboss.com/products/seam or – http://labs.jboss.com/jbossseam/ (« JBoss Seam is a powerful new application framework for building next generation Web 2.0 applications by unifying and integrating technologies such as Asynchronous JavaScript and XML (AJAX), Java Server Faces (JSF), Enterprise Java Beans (EJB3), Java Portlets and Business Process Management (BPM). Seam has been designed from the ground up to eliminate complexity at the architecture and the API level. It enables developers to assemble complex web applications with simple annotated Plain Old Java Objects (POJOs), componentized UI widgets and very little XML.« )

http://labs.jboss.com/jbossejb3/ (« Enterprise Java Beans (EJB) 3.0 is a deep overhaul and simplification of the EJB specification. EJB 3.0’s goals are to simplify development, facilitate test driven development, and focus more on writing plain old java objects (POJOs) rather than on complex EJB APIs. EJB 3.0 has fully embraced Java Annotations introduced in JDK 5.0 and also simplifies the API for CMP entity beans by using Hibernate as the EJB 3.0 Java Persistence engine.« )
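To illustrate the « plain old Java objects plus annotations » style that EJB3 promotes, here is a self-contained sketch; note that the @Entity and @Id annotations below are simplified local stand-ins, not the real javax.persistence ones, so that the snippet compiles on its own:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Simplified stand-ins for the JPA annotations, declared locally so this
// sketch is self-contained. The real ones live in javax.persistence.
@Retention(RetentionPolicy.RUNTIME) @interface Entity { }
@Retention(RetentionPolicy.RUNTIME) @interface Id { }

// In EJB3, an entity is just an annotated POJO: no interfaces to
// implement and no mandatory deployment descriptor for the basics.
@Entity
class Document {
    @Id
    private Long id;
    private String title;

    Long getId() { return id; }
    void setId(Long id) { this.id = id; }
    String getTitle() { return title; }
    void setTitle(String title) { this.title = title; }
}
```

This is the simplification the quote is about: compared with EJB 2.x home/remote interfaces, the persistence metadata rides along on an ordinary class.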

http://www.michaelyuan.com/blog/about/ (« Welcome to my blog site! My name is Michael Yuan. I am a technologist, author, and open source advocate based in Austin, Texas. I currently work as a Technology Evangelist at the JBoss division of Red Hat Inc. Before joining JBoss, I was an independent software consultant in the field of mobile end-to-end solutions« ) et son flux RSS (http://www.michaelyuan.com/blog/category/seam/feed/)

http://www.redhat.com/developers/rhds/index.html (« Red Hat Developer Studio is a set of eclipse-based development tools that are pre-configured for JBoss Enterprise Middleware Platforms and Red Hat Enterprise Linux. Developers are not required to use Red Hat Developer Studio to develop on JBoss Enterprise Middleware and/or Red Hat Linux. But, many find these pre-configured tools offer significant time-savings and value, making them more productive and speeding time to deployment« )

– Learn more about Seam here, and find a list of commonly answered questions here. Or, follow this road map to get started with Seam right away! You can view a recorded Seam webinar.

– http://blogs.nuxeo.com/sections/blogs/fermigier/2007_03_08_nuxeo-s-open-source-projects-trully-community-driven-hell-yes-they (« Fortunately, the short answer is “of course we are community-driven”. With Dion’s criteria, I can confidently self-grade us at A+ (or 20/20, for french-educated people). Here are the criteria and my comments. »)

http://www.nuxeo.org/sections/documentation/ (« Learn Nuxeo EP 5.1 basics with some video demos« )

http://www.nuxeo.org/sections/community/ (« The Nuxeo projects are open source (licensed under the LGPL) and developed with the participation of the community. We mean « participation » here as either: working as a core developer, contributor, third-party component developer (we have designed Nuxeo 5 to be very easily extensible by independent people, to create an « architecture of participation »), tests (unit tests, integration tests or functional tests) writer, Maven / Eclipse specialist, documentation writer / proofreader, etc.« )

Here is a list of what you can do to get involved with the Nuxeo 5 development:

http://maven.nuxeo.org/ (« This website is the Apache Maven site for the Nuxeo EP open source ECM platform. As stated on the nuxeo.org site, « Nuxeo 5 is an innovative, standards-based, open source platform for ECM applications. Its component-based and service-oriented architecture makes it easy to customize and extend, making developers more efficient and ultimately, happier ». For more general information about the project, we strongly suggest that you go to the Nuxeo.org website. What you will find on this site is highly technical, developer-focused information related to the project. This information is generated by the build tool we use (and recommend to third parties that wish to use or extend the platform), Apache Maven, from the source code and meta-information developers put in the source. »)

http://maven.nuxeo.org/source-repository.html (« This project uses Subversion to manage its source code. Instructions on Subversion use can be found at http://svnbook.red-bean.com/. »)

– Accès aux sources

  • Anonymous access

The source can be checked out anonymously from SVN with this command:

$ svn checkout http://svn.nuxeo.org/nuxeo/nuxeo-ep/trunk nuxeo-ecm
  • Developer access

Everyone can access the Subversion repository via HTTP, but Committers must checkout the Subversion repository via HTTPS.

$ svn checkout https://svn.nuxeo.org/nuxeo/nuxeo-ep/trunk nuxeo-ecm

http://in.relation.to/Bloggers/Seam2IsOut (« …the most important thing about the new release is simply that the codebase is much cleaner. The migration to JSF 1.2 allowed us to solve many problems and remove quite a few hacks. We also repackaged built-in components according to a much more logical schema…« )

http://en.wikipedia.org/wiki/Apache_Maven (« Maven is a software tool for Java programming language project management and automated software build. It is similar in functionality to the Apache Ant tool (and to a lesser extent, PHP’s PEAR and Perl’s CPAN), but has a simpler build configuration model, based on an XML format. Maven is hosted by the Apache Software Foundation, where it was formerly part of the Jakarta Project. Maven uses a construct known as a Project Object Model (POM) to describe the software project being built, its dependencies on other external modules and components, and the build order…« )

http://fr.wikipedia.org/wiki/JBoss (« JBoss Application Server est un serveur d’applications J2EE Libre entièrement écrit en Java, publié sous licence LGPL. Parce que le logiciel est écrit en Java, JBoss Application Server peut être utilisé sur tout système d’exploitation fournissant une machine virtuelle Java (JVM). Les développeurs du cœur de JBoss ont tous été employés par une société de services appelée « JBoss Inc. ». Celle-ci a été créée par Marc Fleury, le concepteur de la première version de JBoss. Le projet est sponsorisé par un réseau mondial de partenaires et utilise un business model fondé sur le service. En avril 2006, Red Hat a racheté JBoss Inc. En février 2007 Marc Fleury quitte le groupe Red Hat. JBoss Application Server implémente entièrement l’ensemble des services J2EE. Cela inclue JBoss Portal, JBoss Seam, Tomcat et les frameworks Hibernate, jBPM, et Rules. »)

http://fr.wikipedia.org/wiki/Subversion_(logiciel)  (« Subversion (en abrégé svn) est un système de gestion de versions, distribué sous licence Apache et BSD. Il a été conçu pour remplacer CVS. Ses auteurs s’appuient volontairement sur les mêmes concepts (notamment sur le principe du dépôt centralisé et unique) et considèrent que le modèle de CVS est le bon, et que seule son implémentation est en cause. Le projet a été lancé en février 2000 par CollabNet, avec l’embauche par Jim Blandy de Karl Fogel, qui travaillait déjà sur un nouveau gestionnaire de version. »)

http://en.wikipedia.org/wiki/Enterprise_JavaBean (« Enterprise Java Bean is a managed, server-side component architecture for modular construction of enterprise applications.The EJB specification is one of the several Java APIs in the Java Platform, Enterprise Edition. The EJB specification was originally developed in 1997 by  IBM and later adopted by Sun Microsystems (EJB 1.0 and 1.1) and enhanced under the Java Community Process as JSR 19 (EJB 2.0), JSR 153 (EJB 2.1) and JSR 220 (EJB 3.0). The EJB specification intends to provide a standard way to implement the back-end ‘business’ code typically found in enterprise applications (as opposed to ‘front-end’ user-interface code). Such code was frequently found to reproduce the same types of problems, and it was found that solutions to these problems are often repeatedly re-implemented by programmers. Enterprise Java Beans were intended to handle such common concerns as persistence, transactional integrity, and security in a standard way, leaving programmers free to concentrate on the particular problem at hand.)

http://en.wikipedia.org/wiki/Java_Persistence_API (« …The Java Persistence API was defined as part of the EJB 3.0 specification, which is itself part of the Java EE 5 platform..The Java Persistence API is designed for relational persistence, with many of the key areas taken from object-relational mapping tools such as Hibernate and TopLink. It is generally accepted that the Java Persistence API is a significant improvement on the EJB 2.0 specification…Many enterprise Java developers have been using lightweight persistent objects provided by open-source frameworks or Data Access Objects instead of entity beans because entity beans and enterprise beans were considered too heavyweight and complicated, and they could only be used in Java EE application servers. Many of the features of the third-party persistence frameworks were incorporated into the Java Persistence API, and projects like Hibernate and TopLink are now implementations of the Java Persistence API…« )

http://en.wikipedia.org/wiki/Red_Hat (« Red Hat, Inc. (NYSE: RHT) is one of the larger and more recognized companies dedicated to open source software. It is also the largest distributor of the Linux operating system. Red Hat was founded in 1995 and has its corporate headquarters in Raleigh, North Carolina. The company is best known for its enterprise-class operating system, Red Hat Enterprise Linux and more recently through the acquisition of open source enterprise middleware vendor JBoss. Red Hat provides operating system platforms along with middleware, applications, and management solutions, as well as support, training, and consulting services. »)

Posted in AJAX, Architecture logicielle, Gestion de version, java, JEE, open source, Web applications, Web Frameworks, web2.0 | Tagué: , , , , , , , , | Leave a Comment »

Mapping objet relationnel avec NHibernate, SqlAlchemy et RubyOnRails

Posted by Noam sur août 17, 2007

D’abord quelques définitions:

http://fr.wikipedia.org/wiki/Object-relational_mapping (« L’object-relational mapping (ORM), que l’on pourrait traduire par « correspondance entre monde objet et monde relationnel » est une technique de programmation informatique qui crée l’illusion d’une base de données orientée objet à partir d’une base de données relationnelle en définissant des correspondances entre cette base de données et les objets du langage utilisé« )

http://fr.wikipedia.org/wiki/Hibernate (« Hibernate est un framework open source gérant la persistance des objets en base de données relationnelle. Hibernate est adaptable en termes d’architecture, il peut donc être utilisé aussi bien dans un développement client lourd, que dans un environnement web léger de type Apache Tomcat ou dans un environnement J2EE complet : WebSphere, JBoss Application Server et WebLogic de BEA Systems (voir (en) BEA Weblogic).

…NHibernate est un framework open source gérant la persistance des objets en base de données relationnelle. Il est l’implémentation .NET de Hibernate qui a vu le jour en Java« )

Le patron de conception ActiveRecord

http://fr.wikipedia.org/wiki/Active_record_%28patron_de_conception%29 (« En génie logiciel, le patron de conception (design pattern) active record est une approche pour lire les données d’une base de données. Les attributs d’une table ou d’une vue sont encapsulés dans une classe. Ainsi l’objet, instance de la classe, est lié à un tuple de la base. Après l’instanciation d’un objet, un nouveau tuple est ajouté à la base au moment de l’enregistrement. Chaque objet récupère ses données depuis la base; quand un objet est mis à jour, le tuple auquel il est lié l’est aussi. La classe implémente des accesseurs pour chaque attribut« )

http://www.theserverside.com/tt/articles/article.tss?l=RailsHibernate (« ActiveRecord is « an object that wraps a row in a database table or view, encapsulates database access and adds domain logic on that data »[Fowler, 2003]. This means the ActiveRecord has « class » methods for finding instances, and each instance is responsible for saving, updating and deleting itself in the database. It’s pretty well suited for simpler domain models, those where the tables closely resemble the domain model. It is also generally simpler then the more powerful, but complex Data Mapper pattern« )
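Ce patron peut s’esquisser en quelques lignes de Python (esquisse illustrative avec sqlite3 ; la classe Miner, ses colonnes et la connexion globale sont hypothétiques) : l’objet encapsule une ligne de la table et porte lui-même ses méthodes de persistance.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE miners (id INTEGER PRIMARY KEY, first_name TEXT)")

# Active Record : l'objet encapsule une ligne de la table et sait
# lui-même se sauvegarder (méthode d'instance) et se retrouver
# (méthode de classe), comme le décrit Fowler.
class Miner:
    def __init__(self, first_name, id=None):
        self.id = id
        self.first_name = first_name

    def save(self):
        if self.id is None:  # nouvel objet -> INSERT
            cur = conn.execute("INSERT INTO miners (first_name) VALUES (?)",
                               (self.first_name,))
            self.id = cur.lastrowid
        else:                # objet existant -> UPDATE
            conn.execute("UPDATE miners SET first_name = ? WHERE id = ?",
                         (self.first_name, self.id))

    @classmethod
    def find(cls, miner_id):
        row = conn.execute("SELECT id, first_name FROM miners WHERE id = ?",
                           (miner_id,)).fetchone()
        return cls(row[1], id=row[0]) if row else None

m = Miner("Brom")
m.save()
print(Miner.find(m.id).first_name)  # Brom
```

On voit la limite signalée plus bas : la classe métier et l’accès à la base sont indissociables.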

Le patron de conception DataMapper

http://martinfowler.com/eaaCatalog/dataMapper.html (« Objects and relational databases have different mechanisms for structuring data. Many parts of an object, such as collections and inheritance, aren’t present in relational databases. When you build an object model with a lot of business logic it’s valuable to use these mechanisms to better organize the data and the behavior that goes with it. Doing so leads to variant schemas; that is, the object schema and the relational schema don’t match up….The Data Mapper is a layer of software that separates the in-memory objects from the database. Its responsibility is to transfer data between the two and also to isolate them from each other. With Data Mapper the in-memory objects needn’t know even that there’s a database present; they need no SQL interface code, and certainly no knowledge of the database schema« )

http://www.theserverside.com/tt/articles/article.tss?l=RailsHibernate (« The Data Mapper is « a layer of mappers that moves data between objects and a database while keeping them independent of each other and the mapper itself »[Fowler, 2003]. It moves the responsibility of persistence out of the domain object, and generally uses an identity map to maintain the relationship between the domain objects and the database. In addition, it often (and Hibernate does) use a Unit of Work (Session) to keep track of objects which are changed and make sure they persist correctly.« )

http://en.wikipedia.org/wiki/SQLAlchemy (« SQLAlchemy is an open source SQL toolkit and object-relational mapper for the Python programming language released under the MIT License. SQLAlchemy provides « a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language ». SQLAlchemy’s philosophy is that SQL databases behave less and less like object collections the more size and performance start to matter, while object collections behave less and less like tables and rows the more abstraction starts to matter. For this reason it has (like Hibernate for Java) adopted the Data Mapper pattern rather than the active record pattern used by a number of other object-relational mappers. »)
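Par contraste, voici une esquisse minimale du patron Data Mapper (Python pur avec sqlite3 ; les noms Miner et MinerMapper sont hypothétiques) : l’objet métier ignore tout de la base, c’est le mapper qui fait le transfert dans les deux sens.

```python
import sqlite3

# Objet du domaine : il ne sait rien de la base de données
# (principe du Data Mapper).
class Miner:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

# Le mapper transfère les données entre les objets Miner et la table 'miners'.
class MinerMapper:
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS miners "
                     "(id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)")

    def insert(self, miner):
        cur = self.conn.execute(
            "INSERT INTO miners (first_name, last_name) VALUES (?, ?)",
            (miner.first_name, miner.last_name))
        return cur.lastrowid

    def find(self, miner_id):
        row = self.conn.execute(
            "SELECT first_name, last_name FROM miners WHERE id = ?",
            (miner_id,)).fetchone()
        return Miner(*row) if row else None

conn = sqlite3.connect(":memory:")
mapper = MinerMapper(conn)
new_id = mapper.insert(Miner("Brom", "Garret"))
print(mapper.find(new_id).first_name)  # Brom
```

La classe Miner pourrait être testée sans aucune base : tout le SQL est isolé dans le mapper.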

Le patron de conception UnitOfWork

http://martinfowler.com/eaaCatalog/unitOfWork.html (« When you’re pulling data in and out of a database, it’s important to keep track of what you’ve changed; otherwise, that data won’t be written back into the database. Similarly you have to insert new objects you create and remove any objects you delete…A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you’re done, it figures out everything that needs to be done to alter the database as a result of your work« )

http://www.sqlalchemy.org/features.html (« The Unit Of Work system, a central part of SQLAlchemy’s Object Relational Mapper (ORM), organizes pending create/insert/update/delete operations into queues and flushes them all in one batch. To accomplish this it performs a topological « dependency sort » of all modified items in the queue so as to honor foreign key constraints, and groups redundant statements together where they can sometimes be batched even further. This produces the maximum efficiency and transaction safety, and minimizes chances of deadlocks. Modeled after Fowler’s « Unit of Work » pattern as well as Hibernate, Java’s leading object-relational mapper.[More]« )
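Le principe peut s’illustrer ainsi (esquisse très simplifiée en Python avec sqlite3, sans le tri topologique des dépendances décrit ci-dessus ; les noms sont hypothétiques) : on enregistre les opérations en attente, puis commit() les rejoue toutes dans une seule transaction.

```python
import sqlite3

# Unit of Work minimal : on accumule les objets créés, modifiés ou supprimés,
# puis commit() envoie tout le lot dans une seule transaction.
class UnitOfWork:
    def __init__(self, conn):
        self.conn = conn
        self.new, self.dirty, self.removed = [], [], []

    def register_new(self, row):
        self.new.append(row)

    def register_dirty(self, row_id, first_name):
        self.dirty.append((row_id, first_name))

    def register_removed(self, row_id):
        self.removed.append(row_id)

    def commit(self):
        with self.conn:  # une seule transaction pour tout le lot
            for fn, ln in self.new:
                self.conn.execute(
                    "INSERT INTO miners (first_name, last_name) VALUES (?, ?)",
                    (fn, ln))
            for rid, fn in self.dirty:
                self.conn.execute(
                    "UPDATE miners SET first_name = ? WHERE id = ?", (fn, rid))
            for rid in self.removed:
                self.conn.execute("DELETE FROM miners WHERE id = ?", (rid,))
        self.new, self.dirty, self.removed = [], [], []

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE miners "
             "(id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)")
uow = UnitOfWork(conn)
uow.register_new(("Brom", "Garret"))
uow.register_new(("Seth", "Bullock"))
uow.commit()
print(conn.execute("SELECT COUNT(*) FROM miners").fetchone()[0])  # 2
```

Tant que commit() n’est pas appelé, rien n’est écrit en base : c’est exactement le rôle de la Session dans Hibernate ou SQLAlchemy.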

Comparaison de Hibernate avec le patron de conception ActiveRecord de Ruby On Rails

Différence d’architecture logicielle:

– http://www.theserverside.com/tt/articles/article.tss?l=RailsHibernate (« The core difference between Rails ActiveRecord and Hibernate is the architectural patterns the two are based off of. Rails, obviously, is using the ActiveRecord pattern, where as Hibernate uses the Data Mapper/Identity Map/Unit of Work patterns. Just knowing these two facts gives us some insight into potential differences…general implication of which should be fairly obvious. The ActiveRecord (Rails) will likely be easier to understand and work with, but past a certain point more advanced/complex usages will likely be difficult or just not possible. The question is of course, when or if many projects will cross this line. Let’s look at some specifics of the two frameworks. To illustrate these differences, we will be using code from my « Project Deadwood » sample app. (Guess what I’ve been watching lately. 🙂

Exemples de code:

Soit une table créée avec l’ordre SQL suivant:

create table miners (
    id BIGINT NOT NULL AUTO_INCREMENT,
    first_name VARCHAR(255),
    last_name VARCHAR(255),
    primary key (id)
)

Avec RubyOnRails:

Your corresponding ruby class (miner.rb) and sample usage look like this.

class Miner < ActiveRecord::Base
end

miner.first_name = "Brom"

Avec Hibernate:

On the other hand, your Hibernate class (Miner.java), which specifies the fields, getters/setters and xdoclet tags, looks like so.

/** @hibernate.class table="miners" */
public class Miner
{
    private Long id;
    private String firstName;
    private String lastName;

    /** @hibernate.id generator-class="native" */
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    /** @hibernate.property column="first_name" */
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    /** @hibernate.property column="last_name" */
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}

miner.setFirstName("Brom");

Avec sqlalchemy:

# describe a table called 'miners', query the database for its columns
miners_table = Table('miners', meta, autoload=True, autoload_with=engine)

class Miner(object):
    def __init__(self, first_name, last_name=None):
        self.first_name = first_name
        self.last_name = last_name

    def __repr__(self):
        return "<Miner(%r,%r)>" % (self.first_name, self.last_name)

mapper(Miner, miners_table)
theMiner = Miner(first_name="Brom")

Associations

http://www.theserverside.com/tt/articles/article.tss?l=RailsHibernate (« In the last section, the Miner class we looked at was single table oriented, mapping to a single miners table. ORM solutions support ways to map associated tables to in memory objects, Hibernate and Rails are no different. Both handle the most of the basic mapping strategies. Here’s a non-exhaustive list of association supported by both of them, including the corresponding Hibernate – Rails naming conventions where appropriate.

  • Many to One/One to one – belongs_to/has_one
  • One to Many (set) – has_many
  • Many to Many (set) – has_and_belongs_to_many »)

Exemples d’associations

« As a comparative example, lets look at the many to one relationship. We are going to expand our Deadwood example from part I. We add to the Miner a many to one association with a GoldClaim object. This means there is a foreign key, gold_claim_id in the miners table, which links it to a row in the gold_claims table. »

Avec Hibernate:

(Java)

public class Miner {

    // Other fields/methods omitted
    private GoldClaim goldClaim;

    /**
     * @hibernate.many-to-one column="gold_claim_id"
     *         cascade="save"
     */
    public GoldClaim getGoldClaim() { return goldClaim; }

    public void setGoldClaim(GoldClaim goldClaim) {
        this.goldClaim = goldClaim;
    }
}

Avec RubyOnRails:

(Rails)

class Miner < ActiveRecord::Base

    belongs_to :gold_claim

end

Avec Sqlalchemy :

parent_table = Table('parent', metadata,
    Column('id', Integer, primary_key=True),
    Column('child_id', Integer, ForeignKey('child.id')))

child_table = Table('child', metadata,
    Column('id', Integer, primary_key=True))

class Parent(object):
    pass

class Child(object):
    pass

mapper(Parent, parent_table, properties={
    'child': relation(Child)
})

mapper(Child, child_table)

Pour aller plus loin

http://www.castleproject.org/activerecord/gettingstarted/index.html ( » The Castle ActiveRecord project is an implementation of the ActiveRecord pattern for .NET. The ActiveRecord pattern consists on instance properties representing a record in the database, instance methods acting on that specific record and static methods acting on all records… Complex databases structures (legacy databases mostly) usually are not covered by the range of mapping features supported by ActiveRecord. In this case you might consider using NHibernate directly« )

http://www.sqlalchemy.org/docs/choose.html (« La documentation de sqlalchemy »)

http://en.wikipedia.org/wiki/Nhibernate (« NHibernate is an Object-relational mapping (ORM) solution for the Microsoft .NET platform. It is licensed under the GNU Lesser General Public License.It is a port of the popular Java O/R mapper Hibernate to .NET. Version 1.0 mirrors the feature set of Hibernate 2.1, adding a number of features from Hibernate 3.

NHibernate 1.2.0, released May of 2007, introduces many more features from Hibernate 3 and support for .NET 2.0, stored procedures, generics and nullable types« )

http://www.hibernate.org/343.html (« NHibernate 1.2 introduces many additional features from Hibernate 3. Details about the new features can be found in this blog post. »)

http://blog.hibernate.org/cgi-bin/blosxom.cgi (« The groupblog of the Hibernate Team »)

http://blog.hibernate.org/cgi-bin/blosxom.cgi/Sergey%20Koshcheyev/nhibernate12-is-here.html (« NHibernate 1.2 is not a drop-in replacement for NHibernate 1.0. Before you jump to upgrading all your applications to NHibernate 1.2, note that there are several things that can break. The migration guide in the Wiki contains information about all the changes you need to make to successfully migrate...The next major release of NHibernate will drop support for .NET 1.1. We will continue porting the remaining Hibernate 3 features over to NHibernate. »)

http://www.ayende.com/Blog/category/510.aspx (« Les principales nouvelles sur NHibernate« )

http://www.ayende.com/Blog/ (Blog principal de Ayende @ Rahien)

http://www.hibernate.org/362.html (« NHibernate Quick Start Guide »)

Posted in Architecture logicielle, DotNet, Nhibernate, ORM | 2 Comments »

RESTful Resource Oriented Architecture (ROA), HTTP, URI, XML

Posted by Noam sur juillet 11, 2007

Analyse du livre « Restful web services » de Léonard Richardson et Sam Ruby, Copyright 2007, O’Reilly Media, Inc., ISBN-13: 978-0-596-52926-0.

Pour les exemples du livre on a besoin de:

– les sources du livre (http://examples.oreilly.com/9780596529260/)

Les sources du livre

– comme je suis au travail sous Windows, j’utilise l’émulateur Cygwin pour avoir un environnement de type UNIX.
– les langages semi-interprétés python, ruby, php et les langage C# et java

– différentes bibliothèques logicielles suivant les langages:

  • la bibliothèque Ruby/Amazon (Attention: « Before you can use this library, you need to obtain an Amazon Web Services developer token« ).
    • Pour l’installer
      • $ ruby setup.rb config
        —> lib
        —> lib/amazon
        —> lib/amazon/search
        —> lib/amazon/search/exchange
        <— lib/amazon/search/exchange
        <— lib/amazon/search
        <— lib/amazon
        <— lib
      • $ ruby setup.rb setup
        —> lib
        —> lib/amazon
        —> lib/amazon/search
        —> lib/amazon/search/exchange
        <— lib/amazon/search/exchange
        <— lib/amazon/search
        <— lib/amazon
        <— lib
      • $ ruby setup.rb install
        —> lib
        mkdir -p /usr/lib/ruby/site_ruby/1.8/
        install amazon.rb /usr/lib/ruby/site_ruby/1.8/
        —> lib/amazon
        mkdir -p /usr/lib/ruby/site_ruby/1.8/amazon
        install search.rb /usr/lib/ruby/site_ruby/1.8/amazon
        install wishlist.rb /usr/lib/ruby/site_ruby/1.8/amazon
        install weddingregistry.rb /usr/lib/ruby/site_ruby/1.8/amazon
        install babyregistry.rb /usr/lib/ruby/site_ruby/1.8/amazon
        install transaction.rb /usr/lib/ruby/site_ruby/1.8/amazon
        install locale.rb /usr/lib/ruby/site_ruby/1.8/amazon
        install shoppingcart.rb /usr/lib/ruby/site_ruby/1.8/amazon
        —> lib/amazon/search
        mkdir -p /usr/lib/ruby/site_ruby/1.8/amazon/search
        install seller.rb /usr/lib/ruby/site_ruby/1.8/amazon/search
        install blended.rb /usr/lib/ruby/site_ruby/1.8/amazon/search
        install exchange.rb /usr/lib/ruby/site_ruby/1.8/amazon/search
        install cache.rb /usr/lib/ruby/site_ruby/1.8/amazon/search
        —> lib/amazon/search/exchange
        mkdir -p /usr/lib/ruby/site_ruby/1.8/amazon/search/exchange
        install marketplace.rb /usr/lib/ruby/site_ruby/1.8/amazon/search/exchange
        install thirdparty.rb /usr/lib/ruby/site_ruby/1.8/amazon/search/exchange
        <— lib/amazon/search/exchange
        <— lib/amazon/search
        <— lib/amazon
        <— lib
  • la commande s3sh pour Ruby (http://amazon.rubyforge.org/ , AWS::S3 A Ruby Library for Amazon’s Simple Storage Service’s (S3) REST API).
    • pve@pc72 /cygdrive/e/Temp/rubygems-0.9.4/rubygems-0.9.4
      $ gem i aws-s3 -ry
      Bulk updating Gem source index for: http://gems.rubyforge.org
      Successfully installed aws-s3-0.4.0
      Successfully installed xml-simple-1.0.11
      Successfully installed builder-2.1.2
      Successfully installed mime-types-1.15
      Installing ri documentation for aws-s3-0.4.0…
      Installing ri documentation for builder-2.1.2…
      Installing ri documentation for mime-types-1.15…
      Installing RDoc documentation for aws-s3-0.4.0…
      Installing RDoc documentation for builder-2.1.2…
      Installing RDoc documentation for mime-types-1.15…
      pve@pc72 /cygdrive/e/Temp/rubygems-0.9.4/rubygems-0.9.4

Test du premier programme ruby:

#!/usr/bin/ruby -w
# amazon-book-search.rb
require 'amazon/search'

if ARGV.size != 2
  puts "Usage: #{$0} [Amazon Web Services AccessKey ID] [text to search for]"
  exit
end
access_key, search_request = ARGV

req = Amazon::Search::Request.new(access_key)
# For every book in the search results...
req.keyword_search(search_request, 'books', Amazon::Search::LIGHT) do |book|
  # Print the book's name and the list of authors.
  puts %{"#{book.product_name}" by #{book.authors.join(', ')}}
end

  • $ ruby amazon-book-search.rb 126116STACCESSID « restful web services »
    « RESTful Web Services » by Leonard Richardson, Sam Ruby, David Heinemeier Hansson

Bon j’ai détaillé ce premier exemple car je n’avais encore jamais travaillé avec Ruby. Donc quittons cet aspect très pratique pour entrer dans l’aspect plus théorique du livre.

Les auteurs introduisent la notion de « Web programmable » qui est d’abord et avant tout basée sur le protocole HTTP et sur XML (éventuellement HTML, JSON, du texte ou du binaire). « If you don’t use HTTP, you’re not on the web ». Il y a donc tout un paragraphe (« HTTP: Documents in Envelopes ») sur HTTP. On y apprend qu’à part les méthodes GET et POST que tout le monde connaît 🙂 il existe les méthodes beaucoup plus méconnues que sont HEAD, PUT et DELETE, sans compter OPTIONS, TRACE et CONNECT. La méthode ‘GET’ permet de récupérer une ressource, la méthode ‘DELETE’ permet de la détruire, tandis que la méthode ‘PUT’ permet de la créer ou de la mettre à jour.

On en vient à l’exemple de recherche de photos sur flickr (http://www.flickr.com/services/api/keys/apply). On sait qu’on fait une recherche de ressources grâce à l’emploi du nom de la méthode ‘flickr.photos.search’ qui fait référence à la méthode HTTP GET.

Il y a une comparaison entre les 2 URIS équivalentes suivantes:

– http://flickr.com/photos/tags/penguin –> la méthode est ‘GET’ et l’information de portée (scoping information) est ‘les photos taggées penguin’

– http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=xxx&tags=penguin (dans le bouquin ils ont oublié de mettre &api_key=xxx) –> la méthode est ‘recherche une photo’ et l’information de portée est ‘penguin’.

Il n’y a pas de différence technique entre les 2 URIs mais il y a une différence au niveau de l’architecture si on emploie une méthode comme ‘flickr.photos.delete’ qui utilisera la méthode HTTP GET à une place qui ne lui était pas destinée (‘into places it wasn’t meant to go’). On a une architecture REST-RPC hybride (p.16)
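On peut rendre cette différence visible avec le module standard urllib.parse de Python (la valeur api_key=xxx est fictive, comme dans le livre) : dans la première URI la portée est dans le chemin, dans la seconde la méthode et la portée sont des paramètres de requête.

```python
from urllib.parse import urlparse, parse_qs

# URI « orientée ressource » : l'information de portée est dans le chemin.
restful = urlparse("http://flickr.com/photos/tags/penguin")
print(restful.path)  # /photos/tags/penguin

# URI de style RPC : la méthode et la portée sont des paramètres de requête.
rpc = urlparse("http://api.flickr.com/services/rest/"
               "?method=flickr.photos.search&api_key=xxx&tags=penguin")
params = parse_qs(rpc.query)
print(params["method"])  # ['flickr.photos.search']
print(params["tags"])    # ['penguin']
```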

A partir de là (« RESTful, Resource-Oriented Architectures (ROA) », page 13), les auteurs définissent ce qu’ils entendent par de véritables services web RESTful : ce sont des services qui ressemblent au web et ils les appellent ‘orientés ressource’ (resource-oriented).

Dans des architectures RESTful ‘Orientées Ressource’ l’information de portée est dans l’URI et l’architecture est RESTful car la méthode d’information est la méthode HTTP. Si l’information de portée (scoping information) n’est pas dans l’URI alors le service n’est pas ‘Orienté Ressource’.

Les services suivants sont considérés comme étant ‘RESTful, orientés ressource‘:

tandis que les suivants sont considérés comme REST-RPC hybrid:

  • l’API del.icio.us
  • l’API « REST » de flickr
  • la plupart des applications web

Fin du chapitre 1 (ch1)

Dans le chapitre 2 (« Writing Web Service Clients ») on confirme que l’API del.icio.us n’utilise que la méthode HTTP ‘GET’ quel que soit le service demandé ! Le chapitre 7 « A service implementation » montrera ce qu’est un service RESTful pour le service del.icio.us.

Le but de ce chapitre est d’écrire des clients del.icio.us à l’aide de différents langages et d’utiliser en conséquence différentes bibliothèques permettant de faire des requêtes HTTP. Ces bibliothèques doivent respecter ces caractéristiques:

– le support de HTTPS et du certificat SSL

– implémenter au moins les 5 méthodes HTTP suivantes: GET, HEAD, POST, PUT et DELETE. On rappelle que malheureusement certaines bibliothèques ne supportent que GET et POST, et parfois même uniquement GET ! Les méthodes comme OPTIONS, TRACE et la méthode MOVE du protocole WebDAV sont un bonus.

– doit permettre au programmeur de manipuler les données envoyées lors d’une requête PUT ou POST

– doit permettre au programmeur de modifier les headers HTTP

– doit donner au programmeur accès au code retour et aux headers HTTP de la réponse

– doit pouvoir communiquer au travers d’un serveur mandataire HTTP.

– etc.
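Pour illustrer ces caractéristiques, voici un aller-retour complet en Python pur (esquisse avec les modules standards http.server et http.client ; le petit serveur en mémoire et la fonction call sont hypothétiques) qui exerce les méthodes GET, HEAD, PUT et DELETE :

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Petit serveur de test en mémoire qui accepte les méthodes de base.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    store = {}  # chemin -> contenu

    def _reply(self, code, body=b""):
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        if self.command != "HEAD":  # HEAD : headers seulement, pas de corps
            self.wfile.write(body)

    def do_GET(self):
        body = self.store.get(self.path)
        if body is None:
            self._reply(404)
        else:
            self._reply(200, body)

    def do_HEAD(self):
        self.do_GET()

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        self.store[self.path] = self.rfile.read(length)
        self._reply(201)

    def do_DELETE(self):
        self.store.pop(self.path, None)
        self._reply(204)

    def log_message(self, *args):
        pass  # silence du journal pendant la démonstration

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)

def call(method, path, body=None):
    conn.request(method, path, body=body)
    r = conn.getresponse()
    return r.status, r.read()

print(call("PUT", "/doc", b"hello"))  # (201, b'')
print(call("GET", "/doc"))            # (200, b'hello')
print(call("HEAD", "/doc")[0])        # 200
print(call("DELETE", "/doc"))         # (204, b'')
print(call("GET", "/doc")[0])         # 404
```

On retrouve la sémantique décrite par les auteurs : PUT crée, GET récupère, HEAD ne renvoie que les métadonnées, DELETE détruit.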

La bibliothèque standard de Ruby (open-uri) ne supporte que la méthode ‘GET’. Sam Ruby a écrit rest-open-uri

$ gem install rest-open-uri
Successfully installed rest-open-uri-1.0.0
Installing ri documentation for rest-open-uri-1.0.0…
Installing RDoc documentation for rest-open-uri-1.0.0…

Pour Python, il y a urllib2 (semblable à open-uri), httplib et l’excellente bibliothèque de Joe Gregorio, httplib2 (http://bitworking.org/projects/httplib2), supportant pratiquement toutes les caractéristiques d’une bonne bibliothèque HTTP.

pve@pc72 /cygdrive/e/Temp/httplib2-0.3.0
$ python setup.py install
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/httplib2
copying httplib2/__init__.py -> build/lib/httplib2
copying httplib2/iri2uri.py -> build/lib/httplib2
running install_lib
creating /usr/lib/python2.5/site-packages/httplib2
running install_egg_info
Writing /usr/lib/python2.5/site-packages/httplib2-0.3.0-py2.5.egg-info

Je passe sur Java et C#. Pour PHP, il y a libcurl; pour Javascript XMLHttpRequest (voir chapitre 11).

En ligne de commande on a cURL (http://curl.haxx.se) qui est livré en standard sous cygwin. Utiliser l’option -v de cURL permet d’avoir des informations très intéressantes.

Pour info: lynx est aussi un outil intéressant pour les applications web.
On passe ensuite aux parsers XML

– Ruby: REXML (interfaces DOM et SAX, support XPath). Attention: ne rejette pas le XML mal formé. Autres bibliothèques: libxml2 du projet GNOME, hpricot

$ gem install hpricot
Select which gem to install for your platform (i386-cygwin)
1. hpricot 0.6 (mswin32)
2. hpricot 0.6 (jruby)
3. hpricot 0.6 (ruby)
4. hpricot 0.5 (ruby)
5. hpricot 0.5 (mswin32)
6. Skip this gem
7. Cancel installation
> 3
Building native extensions. This could take a while…
Successfully installed hpricot-0.6
Installing ri documentation for hpricot-0.6…
Installing RDoc documentation for hpricot-0.6…

– Python: ElementTree (support XPath limité), 4Suite (support complet de XPath, http://4suite.org), …etc.

Pour le format JSON, voir le site http://www.json.org.

On finit le chapitre par WADL. Voir le site http://tomayac.de/rest-describe/latest/RestDescribe.html.

Fin du chapitre 2.

Chapitre 3, « What makes Restful services different ? »

Les auteurs rappellent que les APIs del.icio.us et Flickr marchent comme le web lorsqu’il s’agit de rechercher des données mais que ce sont des services de style RPC lorsqu’il s’agit de modifier des données. Les différents services de recherche Yahoo! sont très RESTful mais ils sont si simples qu’ils ne peuvent servir d’exemples. Par contre le protocole de publication Atom (APP) et le service de stockage simple d’Amazon (S3) décrit dans ce chapitre sont RESTful et orientés ressources. Amazon fournit des bibliothèques d’accès à ce service pour différents langages et outils (comme cURL) (http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=47)

Le service S3 RESTful expose toutes les fonctionnalités du service style RPC mais au lieu de le faire avec des noms de fonction spécialisées il expose des objets standards HTTP appelés ‘ressources‘. Au lieu de répondre à des noms de fonctions comme ‘getObjects’ une ressource répond à une ou plusieurs des 4 méthodes HTTP standards: GET, HEAD, PUT et DELETE.

– Pour avoir la valeur d’un objet on envoie la requête ‘GET’ sur l’URI de cet objet

– pour obtenir uniquement les métadonnées de cet objet on envoie la requête ‘HEAD’ sur la même URI

– pour créer un ‘bucket’, on envoie la requête ‘PUT’ sur l’URI de l’objet contenant le nom du bucket

– pour détruire un ‘bucket’ ou un autre objet, on envoie la requête ‘DELETE’ sur l’URI correspondante.

Les concepteurs de S3 n’ont rien fait d’extraordinaire. Suivant le standard HTTP c’est pour faire ce genre de choses que les méthodes GET, HEAD, PUT et DELETE ont été créées !
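Le principe se résume à quelques lignes (esquisse en Python pur, sans aucun rapport avec la vraie implémentation d’Amazon ; la fonction handle et le dictionnaire buckets sont hypothétiques) : une même ressource, identifiée par son URI, répond aux méthodes standards au lieu d’exposer des noms de fonctions spécialisées.

```python
# Chaque ressource (bucket ou objet) répond aux méthodes HTTP standards,
# au lieu d'exposer des noms de fonctions spécialisées ('getObjects', etc.).
buckets = {}

def handle(method, bucket, key=None, body=None):
    if method == "PUT":
        if key is None:                      # créer un bucket
            buckets.setdefault(bucket, {})
        elif bucket in buckets:              # créer/mettre à jour un objet
            buckets[bucket][key] = body
        else:
            return 404, None
        return 200, None
    if bucket not in buckets:
        return 404, None
    if method == "GET":
        if key is None:
            return 200, sorted(buckets[bucket])  # lister le bucket
        if key in buckets[bucket]:
            return 200, buckets[bucket][key]
        return 404, None
    if method == "DELETE":
        if key is None:
            if buckets[bucket]:
                return 409, None             # bucket non vide : 'Conflict'
            del buckets[bucket]
        else:
            buckets[bucket].pop(key, None)
        return 204, None
    return 405, None

print(handle("PUT", "buck1"))                # (200, None)
print(handle("PUT", "buck1", "obj1", b"x"))  # (200, None)
print(handle("DELETE", "buck1"))             # (409, None) : bucket non vide
print(handle("GET", "buck1", "obj1"))        # (200, b'x')
```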

Les codes de réponse HTTP pour S3 vont de 200 (« OK ») à 404 (« Not Found »). La plus commune est 403 (« Forbidden »). S3 utilise aussi les codes 400 (« Bad Request »), indiquant que le serveur ne comprend pas les données envoyées par le client, et 409 (« Conflict »), envoyé au client qui essaye de détruire un bucket qui n’est pas vide.

Ensuite on crée une bibliothèque Ruby pour implémenter un client S3. Le but est d’illustrer la théorie derrière REST. Au lieu d’utiliser des noms de méthodes spécifiques, on utilisera des noms qui reflèteront l’architecture RESTful: get, put, delete etc… Les auteurs créent un module nommé S3::Authorized dans le fichier S3lib.rb qu’il faut modifier pour entrer sa clé publique et sa clé privée (« Enter your public key (Amazon calls it an « Access Key ID ») and your private key (Amazon calls it a « Secret Access Key »). This is so you can sign your S3 requests and Amazon will know who to charge »)

I check the validity of my keys with curl.

pve@pc72 ~
$ cat > pubkey.txt <<EOF
> 1261ffffffffff8X19802
> EOF

pve@pc72 ~
$ cat > key.txt <<EOF
> tmaTMgnMiyyyyyyyyyyypMCV8O9JdxQ4
> EOF
$ curl -v --key key.txt --pubkey pubkey.txt https://s3.amazonaws.com/
* About to connect() to s3.amazonaws.com port 443 (#0)
* Trying 72.21.203.129… connected
* Connected to s3.amazonaws.com (72.21.203.129) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: /usr/share/curl/curl-ca-bundle.crt
CApath: none
* SSLv2, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using RC4-MD5
* Server certificate:
* subject: /C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=s3.amazonaw
s.com
* start date: 2007-02-20 00:00:00 GMT
* expire date: 2008-02-20 23:59:59 GMT
* common name: s3.amazonaws.com (matched)
* issuer: /C=US/O=RSA Data Security, Inc./OU=Secure Server Certification
Authority
* SSL certificate verify ok.
> GET / HTTP/1.1
> User-Agent: curl/7.16.3 (i686-pc-cygwin) libcurl/7.16.3 OpenSSL/0.9.8e zlib/1.
2.3 libssh2/0.15-CVS
> Host: s3.amazonaws.com
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< x-amz-id-2: PyFBRRXn6ohqSTY/OdVyE54dsts1bEsUPYluggqw67RWYEhGBrDl3Ru55piRhIgf
< x-amz-request-id: 4546F4FDF40D80AF
< Date: Wed, 11 Jul 2007 10:27:39 GMT
< Location: http://aws.amazon.com/s3
< Content-Length: 0
< Server: AmazonS3
<
* Connection #0 to host s3.amazonaws.com left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):

I have a problem with the Ruby client:

pve@pc72 /cygdrive/e/projets/REST/ruby/ch3
$ ruby s3-sample-client.rb buck1 obj1 5
/usr/lib/ruby/1.8/net/http.rb:567: warning: using default DH parameters.
/usr/lib/ruby/gems/1.8/gems/rest-open-uri-1.0.0/lib/rest-open-uri.rb:320
n_http’: 403 Forbidden (OpenURI::HTTPError)
from /usr/lib/ruby/gems/1.8/gems/rest-open-uri-1.0.0/lib/rest-op
b:659:in `buffer_open’
……
b:162:in `open_uri’
from /usr/lib/ruby/gems/1.8/gems/rest-open-uri-1.0.0/lib/rest-op
b:561:in `open’
from /usr/lib/ruby/gems/1.8/gems/rest-open-uri-1.0.0/lib/rest-op
b:35:in `open’
from ./S3lib.rb:276:in `open’
from ./S3lib.rb:45:in `get’
from s3-sample-client.rb:13

Next comes an example of using Ruby on Rails to create RESTful services. So Ruby on Rails is installed with the command 'gem install rails':

$ gem install rails
Need to update 13 gems from http://gems.rubyforge.org
………….
complete
Install required dependency rake? [Yn] Y
Install required dependency activesupport? [Yn] Y
Install required dependency activerecord? [Yn] Y
Install required dependency actionpack? [Yn] Y
Install required dependency actionmailer? [Yn] Y
Install required dependency actionwebservice? [Yn] Y
Successfully installed rails-1.2.3
Successfully installed rake-0.7.3
Successfully installed activesupport-1.4.2
Successfully installed activerecord-1.15.3
Successfully installed actionpack-1.13.3
Successfully installed actionmailer-1.3.3
Successfully installed actionwebservice-1.2.3
Installing ri documentation for rake-0.7.3…
Installing ri documentation for activesupport-1.4.2…
Installing ri documentation for activerecord-1.15.3…
Installing ri documentation for actionpack-1.13.3…
Installing ri documentation for actionmailer-1.3.3…
Installing ri documentation for actionwebservice-1.2.3..
Installing RDoc documentation for rake-0.7.3…
Installing RDoc documentation for activesupport-1.4.2…
Installing RDoc documentation for activerecord-1.15.3…
Installing RDoc documentation for actionpack-1.13.3…
Installing RDoc documentation for actionmailer-1.3.3…
Installing RDoc documentation for actionwebservice-1.2.3

pve@pc72 /cygdrive/e/projets/REST/ruby/ch3

We learn that:

– in a Rails application, the model and the controller of the MVC pattern can behave as a RESTful web service. The ActiveResource interface is analogous to the ActiveRecord interface: with ActiveRecord, objects live in a database and are exposed through SQL with the SELECT, INSERT, UPDATE and DELETE commands. With ActiveResource, they live inside a Rails application and are exposed through HTTP with the GET, POST, PUT and DELETE methods.

In conclusion, an S3 clone in Ruby: http://code.whytheluckystiff.net/parkplace

Chapter 4: The Resource-Oriented Architecture (ROA)

The ROA architecture is a way of turning a problem into a RESTful web service: a set of URIs, HTTP and XML that works like the rest of the web and that programmers will enjoy 🙂

RESTful web services can be classified according to the answers to two questions:

– the scoping information is kept in the URI (why should the server send this piece of data rather than another?). This is the principle of addressability.

– the method information is the HTTP method (why should the server send this data rather than delete it?).

For the authors, REST is not an architecture (which contradicts the definition given by Wikipedia) but a set of design criteria. I will not go further into their definition, since it completely contradicts what I read on Wikipedia. The authors consider that REST has no direct link with HTTP and URIs.

A few definitions:

– a resource must have at least one URI, which is both the resource's name and its address. Note that the U in URI has stood for Uniform, not Universal, since December 1994 (see http://tools.ietf.org/html/rfc1738);

– a URI should be descriptive and have a structure (it is therefore a convention to adopt);

– two resources cannot share the same URI;

– a resource may have one or several URIs;

– by definition, a URI designates one and only one resource;

– an application is addressable if it exposes its data as resources;

– having a URI for each resource lets HTTP proxies cache resources;

– an HTTP request is stateless;

– a resource is a source of multiple representations;

– to distinguish different representations, the authors recommend giving each one its own URI, e.g. appending .es, .en or .fr depending on the language. An alternative is content negotiation (see the Accept-Language and Accept HTTP header fields).
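Both options can be sketched in a few lines of Ruby (the URIs and greetings are invented for illustration; a real server would also honour q-values in Accept-Language):

```ruby
# The same resource, with one textual representation per language.
REPRESENTATIONS = {
  "fr" => "Bonjour",
  "en" => "Hello",
  "es" => "Hola",
}

# Strategy 1: the language is encoded in the URI itself.
def representation_for_uri(path)
  lang = path[/\.(\w+)\z/, 1]          # "/greeting.fr" -> "fr"
  REPRESENTATIONS.fetch(lang, REPRESENTATIONS["en"])
end

# Strategy 2: content negotiation via the Accept-Language header
# (first supported language wins).
def representation_for_header(accept_language)
  lang = accept_language.split(",").map(&:strip)
                        .find { |l| REPRESENTATIONS.key?(l) }
  REPRESENTATIONS.fetch(lang, REPRESENTATIONS["en"])
end

puts representation_for_uri("/greeting.fr")   # Bonjour
puts representation_for_header("es, en")      # Hola
```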

Here the authors introduce the notion of the uniform interface:

– to read a resource, use the 'GET' request:

   The GET method means retrieve whatever information (in the form of an
   entity) is identified by the Request-URI. If the Request-URI refers
   to a data-producing process, it is the produced data which shall be
   returned as the entity in the response and not the source text of the
   process, unless that text happens to be the output of the process.

   The semantics of the GET method change to a "conditional GET" if the
   request message includes an If-Modified-Since, If-Unmodified-Since,
   If-Match, If-None-Match, or If-Range header field. A conditional GET
   method requests that the entity be transferred only under the
   circumstances described by the conditional header field(s). The
   conditional GET method is intended to reduce unnecessary network
   usage by allowing cached entities to be refreshed without requiring
   multiple requests or transferring data already held by the client.

   The semantics of the GET method change to a "partial GET" if the
   request message includes a Range header field. A partial GET requests
   that only part of the entity be transferred, as described in section
   14.35. The partial GET method is intended to reduce unnecessary
   network usage by allowing partially-retrieved entities to be
   completed without transferring data already held by the client.

   The response to a GET request is cacheable if and only if it meets
   the requirements for HTTP caching described in section 13

– to delete a resource, use the 'DELETE' request:

   The DELETE method requests that the origin server delete the resource
   identified by the Request-URI. This method MAY be overridden by human
   intervention (or other means) on the origin server. The client cannot
   be guaranteed that the operation has been carried out, even if the
   status code returned from the origin server indicates that the action
   has been completed successfully. However, the server SHOULD NOT
   indicate success unless, at the time the response is given, it
   intends to delete the resource or move it to an inaccessible
   location.

   A successful response SHOULD be 200 (OK) if the response includes an
   entity describing the status, 202 (Accepted) if the action has not
   yet been enacted, or 204 (No Content) if the action has been enacted
   but the response does not include an entity.

   If the request passes through a cache and the Request-URI identifies
   one or more currently cached entities, those entries SHOULD be
   treated as stale. Responses to this method are not cacheable.

– to create or modify a resource, use the 'PUT' request on its URI:

   The PUT method requests that the enclosed entity be stored under the
   supplied Request-URI. If the Request-URI refers to an already
   existing resource, the enclosed entity SHOULD be considered as a
   modified version of the one residing on the origin server. If the
   Request-URI does not point to an existing resource, and that URI is
   capable of being defined as a new resource by the requesting user
   agent, the origin server can create the resource with that URI. If a
   new resource is created, the origin server MUST inform the user agent
   via the 201 (Created) response. If an existing resource is modified,
   either the 200 (OK) or 204 (No Content) response codes SHOULD be sent
   to indicate successful completion of the request. If the resource
   could not be created or modified with the Request-URI, an appropriate
   error response SHOULD be given that reflects the nature of the
   problem. The recipient of the entity MUST NOT ignore any Content-*
   (e.g. Content-Range) headers that it does not understand or implement
   and MUST return a 501 (Not Implemented) response in such cases.

   If the request passes through a cache and the Request-URI identifies
   one or more currently cached entities, those entries SHOULD be
   treated as stale. Responses to this method are not cacheable.

   The fundamental difference between the POST and PUT requests is
   reflected in the different meaning of the Request-URI. The URI in a
   POST request identifies the resource that will handle the enclosed
   entity. That resource might be a data-accepting process, a gateway to
   some other protocol, or a separate entity that accepts annotations.
   In contrast, the URI in a PUT request identifies the entity enclosed
   with the request -- the user agent knows what URI is intended and the
   server MUST NOT attempt to apply the request to some other resource.
   If the server desires that the request be applied to a different URI,

Fielding, et al.            Standards Track                    [Page 55]
 
RFC 2616                        HTTP/1.1                       June 1999

   it MUST send a 301 (Moved Permanently) response; the user agent MAY
   then make its own decision regarding whether or not to redirect the
   request.

   A single resource MAY be identified by many different URIs. For
   example, an article might have a URI for identifying "the current
   version" which is separate from the URI identifying each particular
   version. In this case, a PUT request on a general URI might result in
   several other URIs being defined by the origin server.

   HTTP/1.1 does not define how a PUT method affects the state of an
   origin server.

– to retrieve the metadata of a resource, use the 'HEAD' request. A client can use this request to check whether a resource exists:

   The HEAD method is identical to GET except that the server MUST NOT
   return a message-body in the response. The metainformation contained
   in the HTTP headers in response to a HEAD request SHOULD be identical
   to the information sent in response to a GET request. This method can
   be used for obtaining metainformation about the entity implied by the
   request without transferring the entity-body itself. This method is
   often used for testing hypertext links for validity, accessibility,
   and recent modification.

   The response to a HEAD request MAY be cacheable in the sense that the
   information contained in the response MAY be used to update a
   previously cached entity from that resource. If the new field values
   indicate that the cached entity differs from the current entity (as
   would be indicated by a change in Content-Length, Content-MD5, ETag
   or Last-Modified), then the cache MUST treat the cache entry as
   stale.

– to find out which operations are allowed on a resource, use the 'OPTIONS' request. This is rather theoretical, since very few servers support it:

   The OPTIONS method represents a request for information about the
   communication options available on the request/response chain
   identified by the Request-URI. This method allows the client to
   determine the options and/or requirements associated with a resource,
   or the capabilities of a server, without implying a resource action
   or initiating a resource retrieval.

   Responses to this method are not cacheable.

   If the OPTIONS request includes an entity-body (as indicated by the
   presence of Content-Length or Transfer-Encoding), then the media type
   MUST be indicated by a Content-Type field. Although this
   specification does not define any use for such a body, future
   extensions to HTTP might use the OPTIONS body to make more detailed
   queries on the server. A server that does not support such an
   extension MAY discard the request body.

   If the Request-URI is an asterisk ("*"), the OPTIONS request is
   intended to apply to the server in general rather than to a specific
   resource. Since a server's communication options typically depend on
   the resource, the "*" request is only useful as a "ping" or "no-op"
   type of method; it does nothing beyond allowing the client to test
   the capabilities of the server. For example, this can be used to test
   a proxy for HTTP/1.1 compliance (or lack thereof).

   If the Request-URI is not an asterisk, the OPTIONS request applies
   only to the options that are available when communicating with that
   resource.

   A 200 response SHOULD include any header fields that indicate
   optional features implemented by the server and applicable to that
   resource (e.g., Allow), possibly including extensions not defined by
   this specification. The response body, if any, SHOULD also include
   information about the communication options. The format for such a


   body is not defined by this specification, but might be defined by
   future extensions to HTTP. Content negotiation MAY be used to select
   the appropriate response format. If no response body is included, the
   response MUST include a Content-Length field with a field-value of
   "0".

   The Max-Forwards request-header field MAY be used to target a
   specific proxy in the request chain. When a proxy receives an OPTIONS
   request on an absoluteURI for which request forwarding is permitted,
   the proxy MUST check for a Max-Forwards field. If the Max-Forwards
   field-value is zero ("0"), the proxy MUST NOT forward the message;
   instead, the proxy SHOULD respond with its own communication options.
   If the Max-Forwards field-value is an integer greater than zero, the
   proxy MUST decrement the field-value when it forwards the request. If
   no Max-Forwards field is present in the request, then the forwarded
   request MUST NOT include a Max-Forwards field.

– the 'POST' request is the most misunderstood of the HTTP methods:

   The POST method is used to request that the origin server accept the
   entity enclosed in the request as a new subordinate of the resource
   identified by the Request-URI in the Request-Line. POST is designed
   to allow a uniform method to cover the following functions:

      - Annotation of existing resources;

      - Posting a message to a bulletin board, newsgroup, mailing list,
        or similar group of articles;

      - Providing a block of data, such as the result of submitting a
        form, to a data-handling process;

      - Extending a database through an append operation.

   The actual function performed by the POST method is determined by the
   server and is usually dependent on the Request-URI. The posted entity
   is subordinate to that URI in the same way that a file is subordinate
   to a directory containing it, a news article is subordinate to a
   newsgroup to which it is posted, or a record is subordinate to a
   database.

   The action performed by the POST method might not result in a
   resource that can be identified by a URI. In this case, either 200
   (OK) or 204 (No Content) is the appropriate response status,
   depending on whether or not the response includes an entity that
   describes the result.


   If a resource has been created on the origin server, the response
   SHOULD be 201 (Created) and contain an entity which describes the
   status of the request and refers to the new resource, and a Location
   header (see section 14.30).

   Responses to this method are not cacheable, unless the response
   includes appropriate Cache-Control or Expires header fields. However,
   the 303 (See Other) response can be used to direct the user agent to
   retrieve a cacheable resource.

In a RESTful design, the 'POST' method means appending a resource to an existing one. The difference between 'PUT' and 'POST' is this: the client uses 'PUT' when it is the client that decides the resource's URI, and 'POST' when it is the server that decides which URI will identify the new resource. In other words, 'POST' is a way of creating a new resource without the client knowing its exact future URI; in most cases, the client knows the URI of the resource's parent. The response to this kind of POST is usually 201 ('Created'). Once the resource exists, its URI is defined and the client can use the 'PUT', 'GET' and 'DELETE' methods.

But the meaning that 99.9% of developers know is overloaded 'POST': the famous 'POST' request used in HTML forms, where the information about what is to be done is carried inside the HTTP request itself (URI, HTTP headers, or request body). According to the authors, this is unavoidable...!

– GET and HEAD are safe methods:

   Implementors should be aware that the software represents the user in
   their interactions over the Internet, and should be careful to allow
   the user to be aware of any actions they might take which may have an
   unexpected significance to themselves or others.

   In particular, the convention has been established that the GET and
   HEAD methods SHOULD NOT have the significance of taking an action
   other than retrieval. These methods ought to be considered "safe".
   This allows user agents to represent other methods, such as POST, PUT
   and DELETE, in a special way, so that the user is made aware of the
   fact that a possibly unsafe action is being requested.

   Naturally, it is not possible to ensure that the server does not
   generate side-effects as a result of performing a GET request; in
   fact, some dynamic resources consider that a feature. The important
   distinction here is that the user did not request the side-effects,
   so therefore cannot be held accountable for them.

– GET, HEAD, PUT and DELETE are idempotent:

   Methods can also have the property of "idempotence" in that (aside
   from error or expiration issues) the side-effects of N > 0 identical
   requests is the same as for a single request. The methods GET, HEAD,
   PUT and DELETE share this property. Also, the methods OPTIONS and
   TRACE SHOULD NOT have side effects, and so are inherently idempotent.


   However, it is possible that a sequence of several requests is non-
   idempotent, even if all of the methods executed in that sequence are
   idempotent. (A sequence is idempotent if a single execution of the
   entire sequence always yields a result that is not changed by a
   reexecution of all, or part, of that sequence.) For example, a
   sequence is non-idempotent if its result depends on a value that is
   later modified in the same sequence.

   A sequence that never has side effects is idempotent, by definition
   (provided that no concurrent operations are being executed on the
   same set of resources).

– the 'POST' method is neither safe nor idempotent.
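The contrast can be made concrete with a toy in-memory store (entirely hypothetical; it mimics the client/server split only in spirit): replaying a PUT leaves the state unchanged, while replaying a POST, where the server picks the URI, creates a new resource each time:

```ruby
# A toy resource store illustrating idempotence.
class Store
  def initialize
    @resources = {}
    @next_id = 0
  end
  attr_reader :resources

  # PUT: the client chooses the URI, so replaying the request is harmless.
  def put(uri, value)
    @resources[uri] = value
  end

  # POST: the server picks the URI, so each call creates a new resource.
  def post(parent_uri, value)
    @next_id += 1
    uri = "#{parent_uri}/#{@next_id}"
    @resources[uri] = value
    uri
  end
end

store = Store.new
store.put("/weblogs/bob", "profile")
store.put("/weblogs/bob", "profile")   # replayed PUT: still one resource
store.post("/weblogs", "entry")
store.post("/weblogs", "entry")        # replayed POST: a second resource
puts store.resources.keys
```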

– the authors point out that the 'GET' method has not always been used correctly on the web, and can end up being unsafe! See Google and its Web Accelerator of 2005.

– the authors conclude by noting that HTML forms currently allow only POST and GET (on p. 153 they use 'PUT' and 'DELETE', which are part of the future HTML5 standard; see http://cafe.elharo.com/web/why-rest-failed/).

http://www.w3.org/TR/html401/interact/forms.html#adef-method

method = get|post [CI]
This attribute specifies which HTTP method will be used to submit the form data set. Possible (case-insensitive) values are "get" (the default) and "post". See the section on form submission for usage information.

http://www.whatwg.org/specs/web-forms/current-work/#methodAndEnctypes

"The HTTP specification defines various methods that can be used with HTTP URIs. Four of these may be used as values of the method attribute: get, post, put, and delete. In this specification, these method names are applied to other protocols as well. This section defines how they should be interpreted.

If the specified method is not one of get, post, put, or delete then it is treated as get in the tables below."

Chapter 5: "Designing Read-Only Resource-Oriented Services"

First you define your data set and decide how to split the data into resources. Instead of thinking in terms of actions, think in terms of the results of an action.

Next, the resources must be named ("Name the Resources"). Remember that in a resource-oriented service, the URI contains all the scoping information. Some advice:

– use path segments to encode a hierarchy: /parent/child

– use punctuation characters when the pieces of data do not form a hierarchy: /parent/child1;child2

– use query variables to indicate that inputs are expected: /search?q=cycliste&name=poupou
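These three conventions are easy to encode as small helpers. A sketch (the helper names and example values are mine, not the book's):

```ruby
require "uri"

def child_uri(parent, child)
  "/#{parent}/#{child}"                      # hierarchy: /parent/child
end

def siblings_uri(parent, children)
  "/#{parent}/#{children.join(';')}"         # punctuation: /parent/a;b
end

def search_uri(params)
  "/search?" + URI.encode_www_form(params)   # inputs as query variables
end

puts child_uri("weblogs", "bob")                  # /weblogs/bob
puts siblings_uri("weblogs", %w[bob alice])       # /weblogs/bob;alice
puts search_uri(q: "cycliste", name: "poupou")    # /search?q=cycliste&name=poupou
```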

Now you must decide what data to send back to the client, and in which formats, when a client requests a resource (the choice of representation). The main purpose of a representation is to convey the state of the resource. A representation may be in XML (with a specific vocabulary such as XHTML, Atom, etc.).

As for the HTTP responses:

– 200 ("OK");

– the Content-Type HTTP header will be application/xhtml+xml for search results and image/png for images;

– the authors discuss the conditional 'GET'. Use the 'Last-Modified' and 'If-Modified-Since' header fields; the 304 ("Not Modified") response avoids re-sending data that has not changed. See Chapter 8 for more details.



14.25 If-Modified-Since

   The If-Modified-Since request-header field is used with a method to
   make it conditional: if the requested variant has not been modified
   since the time specified in this field, an entity will not be
   returned from the server; instead, a 304 (not modified) response will
   be returned without any message-body.

       If-Modified-Since = "If-Modified-Since" ":" HTTP-date


   An example of the field is:

       If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT

   A GET method with an If-Modified-Since header and no Range header
   requests that the identified entity be transferred only if it has
   been modified since the date given by the If-Modified-Since header.
   The algorithm for determining this includes the following cases:

      a) If the request would normally result in anything other than a
         200 (OK) status, or if the passed If-Modified-Since date is
         invalid, the response is exactly the same as for a normal GET.
         A date which is later than the server's current time is
         invalid.

      b) If the variant has been modified since the If-Modified-Since
         date, the response is exactly the same as for a normal GET.

      c) If the variant has not been modified since a valid If-
         Modified-Since date, the server SHOULD return a 304 (Not
         Modified) response.

   The purpose of this feature is to allow efficient updates of cached
   information with a minimum amount of transaction overhead.

      Note: The Range request-header field modifies the meaning of If-
      Modified-Since; see section 14.35 for full details.

      Note: If-Modified-Since times are interpreted by the server, whose
      clock might not be synchronized with the client.

      Note: When handling an If-Modified-Since header field, some
      servers will use an exact date comparison function, rather than a
      less-than function, for deciding whether to send a 304 (Not
      Modified) response. To get best results when sending an If-
      Modified-Since header field for cache validation, clients are
      advised to use the exact date string received in a previous Last-
      Modified header field whenever possible.

      Note: If a client uses an arbitrary date in the If-Modified-Since
      header instead of a date taken from the Last-Modified header for
      the same request, the client should be aware of the fact that this
      date is interpreted in the server's understanding of time. The
      client should consider unsynchronized clocks and rounding problems
      due to the different encodings of time between the client and
      server. This includes the possibility of race conditions if the
      document has changed between the time it was first requested and
      the If-Modified-Since date of a subsequent request, and the


      possibility of clock-skew-related problems if the If-Modified-
      Since date is derived from the client's clock without correction
      to the server's clock. Corrections for different time bases
      between client and server are at best approximate due to network
      latency.

   The result of a request having both an If-Modified-Since header field
   and either an If-Match or an If-Unmodified-Since header field is
   undefined by this specification

10.3.5 304 Not Modified

If the client has performed a conditional GET request and access is
allowed, but the document has not been modified, the server SHOULD
respond with this status code. The 304 response MUST NOT contain a
message-body, and thus is always terminated by the first empty line
after the header fields.

The response MUST include the following header fields:

- Date, unless its omission is required by section 14.18.1


If a clockless origin server obeys these rules, and proxies and
clients add their own Date to any response received without one (as
already specified by [RFC 2068], section 14.19), caches will operate
correctly.

- ETag and/or Content-Location, if the header would have been sent
in a 200 response to the same request

- Expires, Cache-Control, and/or Vary, if the field-value might
differ from that sent in any previous response for the same
variant

If the conditional GET used a strong cache validator (see section
13.3.3), the response SHOULD NOT include other entity-headers.
Otherwise (i.e., the conditional GET used a weak validator), the
response MUST NOT include other entity-headers; this prevents
inconsistencies between cached entity-bodies and updated headers.

If a 304 response indicates an entity not currently cached, then the
cache MUST disregard the response and repeat the request without the
conditional.

If a cache uses a received 304 response to update a cache entry, the
cache MUST update the entry to reflect any new field values given in
the response.
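The server-side decision described in the excerpts above fits in a few lines. A sketch (the helper name is hypothetical) following the RFC's cases: a missing or invalid header means a normal GET, otherwise compare against the resource's Last-Modified time:

```ruby
require "time"

# Decide between 200 and 304 for a conditional GET.
def conditional_get(last_modified, if_modified_since)
  return 200 if if_modified_since.nil?
  header_time = Time.httpdate(if_modified_since) rescue nil
  return 200 if header_time.nil?   # invalid date: behave as a plain GET
  # (the RFC also treats dates later than the server's clock as invalid;
  # that check is omitted here)
  last_modified > header_time ? 200 : 304
end

last_modified = Time.httpdate("Sat, 29 Oct 1994 19:43:31 GMT")
puts conditional_get(last_modified, "Sat, 29 Oct 1994 19:43:31 GMT")  # 304
puts conditional_get(last_modified, "Sat, 29 Oct 1994 19:43:30 GMT")  # 200
```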

Chapter 6: "Designing Read/Write Resource-Oriented Services"

Writing implies opening a user account, which itself becomes a resource, with the associated authentication and authorization issues. The 'Authorization' header field must be used:


14.8 Authorization

   A user agent that wishes to authenticate itself with a server--
   usually, but not necessarily, after receiving a 401 response--does
   so by including an Authorization request-header field with the
   request. The Authorization field value consists of credentials
   containing the authentication information of the user agent for the
   realm of the resource being requested.

       Authorization  = "Authorization" ":" credentials

   HTTP access authentication is described in "HTTP Authentication:
   Basic and Digest Access Authentication" [43]. If a request is
   authenticated and a realm specified, the same credentials SHOULD be
   valid for all other requests within this realm (assuming that the
   authentication scheme itself does not require otherwise, such as
   credentials that vary according to a challenge value or using
   synchronized clocks).

   When a shared cache (see section 13.7) receives a request containing
   an Authorization field, it MUST NOT return the corresponding
   response as a reply to any other request, unless one of the
   following specific exceptions holds:

   1. If the response includes the "s-maxage" cache-control directive,
      the cache MAY use that response in replying to a subsequent
      request. But (if the specified maximum age has passed) a proxy
      cache MUST first revalidate it with the origin server, using the
      request-headers from the new request to allow the origin server
      to authenticate the new request. (This is the defined behavior
      for s-maxage.) If the response includes "s-maxage=0", the proxy
      MUST always revalidate it before re-using it.

   2. If the response includes the "must-revalidate" cache-control
      directive, the cache MAY use that response in replying to a
      subsequent request. But if the response is stale, all caches
      MUST first revalidate it with the origin server, using the
      request-headers from the new request to allow the origin server
      to authenticate the new request.

   3. If the response includes the "public" cache-control directive,
      it MAY be returned in reply to any subsequent request.

If the response is 401 ("Unauthorized"), it must include a 'WWW-Authenticate' header field.


10.4.2 401 Unauthorized

The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication" [43].

The main authentication mechanisms are 'HTTP Basic', 'HTTP Digest' (http://tools.ietf.org/html/rfc2617, HTTP Authentication: Basic and Digest Access Authentication) and 'WSSE', all of which are weakly secure. See chapter 8 for more details.
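To see why HTTP Basic is so weakly secure, here is a minimal sketch (modern Python, hypothetical credentials) showing that the Authorization header value is nothing more than the base64 encoding of user:password, trivially reversible by anyone who sees the request:

```python
import base64

def basic_auth_header(user, password):
    """Build the value of an HTTP Basic 'Authorization' header."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")

header = basic_auth_header("alice", "secret")
# header == "Basic YWxpY2U6c2VjcmV0"

# Anyone who observes the header can decode the credentials:
decoded = base64.b64decode(header.split(" ", 1)[1]).decode("utf-8")
# decoded == "alice:secret"
```

Base64 is an encoding, not encryption, which is why Basic should only be used over a protected channel.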

The authors again mention XHTML5 (http://www.whatwg.org/specs/web-apps/current-work/, HTML 5 Working Draft — 28 June 2007), which will (someday?) allow 'PUT' and 'DELETE' in forms. They also mention the new 'template' attribute (not yet approved; see http://tools.ietf.org/id/draft-gregorio-uritemplate-00.txt):

   A URI Template is a sequence of characters that contains one or more
   embedded template variables (Section 4.1).  A URI Template becomes a
   URI when the template variables are substituted with the template
   variables' string values, see Section 4.2.  The following shows an
   example URI Template:

   http://example.com/widgets/{widget_id}

   If the value of the widget_id variable is "xyzzy", the resulting URI
   after substitution is:

   http://example.com/widgets/xyzzy
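A URI Template expander is easy to sketch. The following minimal Python version (illustrative only; it handles simple {name} variables, not the draft's full algorithm) reproduces the example above:

```python
import re

def expand(template, variables):
    """Substitute {name} template variables with their string values."""
    def replace(match):
        return variables[match.group(1)]
    return re.sub(r"\{(\w+)\}", replace, template)

uri = expand("http://example.com/widgets/{widget_id}", {"widget_id": "xyzzy"})
# uri == "http://example.com/widgets/xyzzy"
```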

Chapter 7: "A Service Implementation"

The authors recall that, until now, web frameworks have focused on applications for web browsers, and therefore used only the two methods available in HTML forms, GET and POST (see http://cafe.elharo.com/web/why-rest-failed/ and http://www.w3.org/TR/html401/interact/forms.html#adef-method).

New frameworks for developing RESTful services have appeared: Django (Python), Restlet (Java) and Ruby on Rails. (These frameworks are covered in detail in chapter 12.)

The goal of this chapter is to write a RESTful social-bookmarking application with Ruby on Rails. So let's go!

pve@pc72 /cygdrive/e/projets
$ rails bookmarks
create
create app/controllers
create app/helpers
create app/models
create app/views/layouts
create config/environments
create components
create db
create doc
create lib
create lib/tasks
create log
create public/images
create public/javascripts
create public/stylesheets
create script/performance
create script/process
create test/fixtures
create test/functional
create test/integration
create test/mocks/developmen
create test/mocks/test
create test/unit
create vendor
create vendor/plugins
create tmp/sessions
create tmp/sockets
create tmp/cache
create tmp/pids
create Rakefile
create README
create app/controllers/appli
create app/helpers/applicati
create test/test_helper.rb
create config/database.yml
create config/routes.rb
create public/.htaccess
create config/boot.rb
create config/environment.rb
create config/environments/p
create config/environments/d
create config/environments/t
create script/about
create script/breakpointer
create script/console
create script/destroy
create script/generate
create script/performance/be
create script/performance/pr
create script/process/reaper
create script/process/spawne
create script/process/inspec
create script/runner
create script/server
create script/plugin
create public/dispatch.rb
create public/dispatch.cgi
create public/dispatch.fcgi
create public/404.html
create public/500.html
create public/index.html
create public/favicon.ico
create public/robots.txt
create public/images/rails.p
create public/javascripts/pr
create public/javascripts/ef
create public/javascripts/dr
create public/javascripts/co
create public/javascripts/ap
create doc/README_FOR_APP
create log/server.log
create log/production.log
create log/development.log
create log/test.log

pve@pc72 /cygdrive/e/projets

$ script/plugin discover
Add http://www.agilewebdevelopment.com/plugins/? [Y/n] Y
Add svn://rubyforge.org/var/svn/expressica/plugins/? [Y/n] Y
Add http://soen.ca/svn/projects/rails/plugins/? [Y/n] Y
Add http://technoweenie.stikipad.com/plugins/? [Y/n] Y
Add http://svn.techno-weenie.net/projects/plugins/? [Y/n] Y
Add http://svn.recentrambles.com/plugins/? [Y/n] Y
Add http://opensvn.csie.org/rails_file_column/plugins/? [Y/n] Y
Add http://svn.protocool.com/rails/plugins/? [Y/n] Y
Add http://tools.assembla.com/svn/breakout/breakout/vendor/plugins/? [Y/
Add http://svn.pragprog.com/Public/plugins/? [Y/n] Y
Add http://source.collectiveidea.com/public/rails/plugins/? [Y/n] Y
Add https://secure.near-time.com/svn/plugins/? [Y/n] Y
Add http://svn.inlet-media.de/svn/rails_extensions/plugins/? [Y/n] Y
Add http://svn.viney.net.nz/things/rails/plugins/? [Y/n] Y
Add http://svn.hasmanythrough.com/public/plugins/? [Y/n] Y
Add http://svn.shiftnetwork.com/plugins/? [Y/n] Y
Add svn://caboo.se/plugins/? [Y/n] Y
Add http://svn.6brand.com/projects/plugins/? [Y/n] Y
Add http://shanesbrain.net/svn/rails/plugins/? [Y/n] Y
Add svn://errtheblog.com/svn/plugins/? [Y/n] Y
Add http://svn.nkryptic.com/plugins/? [Y/n] Y
Add http://svn.thoughtbot.com/plugins/? [Y/n] Y
Add http://svn.webwideconsulting.com/plugins/? [Y/n] Y
Add http://invisible.ch/svn/projects/plugins/? [Y/n] Y
Add svn://rubyforge.org/var/svn/enum-column/plugins/? [Y/n] Y
Add http://streamlinedframework.org:8079/streamlined/plugins/? [Y/n] Y
Add svn://dvisionfactory.com/rails/plugins/? [Y/n] Y
Add http://hivelogic.com/plugins/? [Y/n] Y
Add http://mattmccray.com/svn/rails/plugins/? [Y/n] Y
Add svn://rubyforge.org/var/svn/cartographer/plugins/? [Y/n] Y
Add http://www.svn.recentrambles.com/plugins/? [Y/n] Y
Add http://tanjero.com/svn/plugins/? [Y/n] Y
Add http://filetofsole.org/svn/public/projects/rails/plugins/? [Y/n] Y
Add http://topfunky.net/svn/plugins/? [Y/n] Y
Add http://svn.joshpeek.com/projects/plugins/? [Y/n] Y
Add svn://rubyforge.org/var/svn/agtools/plugins/? [Y/n] Y
Add http://svn.aviditybytes.com/rails/plugins/? [Y/n] Y
Add http://beautifulpixel.textdriven.com/svn/plugins/? [Y/n] Y
Add http://mabs29.googlecode.com/svn/trunk/plugins/? [Y/n] Y
Add http://www.codyfauser.com/svn/projects/plugins/? [Y/n] Y
Add http://craz8.com/svn/trunk/plugins/? [Y/n] Y
Add http://sean.treadway.info/svn/plugins/? [Y/n] Y
Add http://svn.thebootstrapnation.com/public/plugins/? [Y/n] Y
Add http://www.mattmccray.com/svn/rails/plugins/? [Y/n] Y
Add svn://rubyforge.org//var/svn/validaterequest/plugins/? [Y/n] Y
Add http://sprocket.slackworks.com/svn/rails/plugins/? [Y/n] Y
Add http://svn.simpltry.com/plugins/? [Y/n] Y
Add http://svn.elctech.com/svn/public/plugins/? [Y/n] Y
Add http://xmlblog.stikipad.com/plugins/? [Y/n] Y
Add http://www.xml-blog.com/svn/plugins/? [Y/n] Y
Add http://svn.toolbocks.com/plugins/? [Y/n] Y
Add http://thar.be/svn/projects/plugins/? [Y/n] Y
Add http://code.teytek.com/rails/plugins/? [Y/n] Y
Add http://www.infused.org/svn/plugins/? [Y/n] Y
Add svn://rubyforge.org/var/svn/apptrain/trunk/vendor/plugins/? [Y/n] Y
Add http://s3cachestore.googlecode.com/svn/trunk/plugins/? [Y/n] Y
Add http://sbecker.net/shared/plugins/? [Y/n] Y
Add http://opensvn.csie.org/macaque/plugins/? [Y/n] Y
Add http://svn.designbyfront.com/rails/plugins/? [Y/n] Y
Add http://svn.rails-engines.org/plugins/? [Y/n] Y
Add http://dev.fiatdev.com/svn/plugins/? [Y/n] Y
Add http://john.guen.in/svn/plugins/? [Y/n] Y
Add http://www.redhillonrails.org/svn/trunk/vendor/plugins/? [Y/n] Y
Add svn://rubyforge.org/var/svn/actsdisjoint/plugins/? [Y/n] Y
Add http://ajaxmessaging.googlecode.com/svn/trunk/plugins/? [Y/n] Y
Add http://mod-i18n.googlecode.com/svn/trunk/plugins/? [Y/n] Y

$ script/plugin install acts_as_taggable
svn: Can’t connect to host ‘rubyforge.org’: Connect
svn: Can’t connect to host ‘caboo.se’: Connection t
svn: Can’t connect to host ‘errtheblog.com’: Connec
svn: Can’t connect to host ‘rubyforge.org’: Connect
svn: Can’t connect to host ‘dvisionfactory.com’: Co
svn: Can’t connect to host ‘rubyforge.org’: Connect
+ ./acts_as_taggable/MIT-LICENSE
+ ./acts_as_taggable/README
+ ./acts_as_taggable/generators/acts_as_taggable_ta
generator.rb
+ ./acts_as_taggable/generators/acts_as_taggable_ta
+ ./acts_as_taggable/init.rb
+ ./acts_as_taggable/lib/acts_as_taggable.rb
+ ./acts_as_taggable/lib/tag.rb
+ ./acts_as_taggable/lib/tagging.rb
+ ./acts_as_taggable/tasks/acts_as_taggable_tasks.r
+ ./acts_as_taggable/test/acts_as_taggable_test.rb

$ script/plugin install http_authentication
svn: Can’t connect to host ‘rubyforge.org’:
svn: Can’t connect to host ‘caboo.se’: Conn
svn: Can’t connect to host ‘errtheblog.com’
svn: Can’t connect to host ‘rubyforge.org’:
svn: Can’t connect to host ‘dvisionfactory.
svn: Can’t connect to host ‘rubyforge.org’:
svn: Can’t connect to host ‘rubyforge.org’:
svn: Can’t connect to host ‘rubyforge.org’:
svn: Can’t connect to host ‘rubyforge.org’:
svn: Can’t connect to host ‘rubyforge.org’:
Plugin not found: ["http_authentication"]

Chapter 8: "REST and ROA Best Practices"

A recap of the good practices.

Chapter 9: "The Building Blocks of Services"

A reminder that web services are based on three fundamental technologies: HTTP, URIs and XML.

The authors present representation formats such as XHTML (MIME type: application/xhtml+xml), microformats, Atom, OpenSearch, SVG, form-encoded key-value pairs (application/x-www-form-urlencoded), JSON, RDF and RDFa.
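Form-encoded key-value pairs are what a browser sends when it submits a form. A quick illustration (modern Python is used here; the book itself predates Python 3):

```python
from urllib.parse import urlencode, parse_qs

# Encode a dictionary as application/x-www-form-urlencoded
body = urlencode({"q": "REST web services", "lang": "fr"})
# body == "q=REST+web+services&lang=fr"

# ...and decode it back into a dictionary of value lists
params = parse_qs(body)
# params == {"q": ["REST web services"], "lang": ["fr"]}
```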

They then discuss character encodings. In the United States, UTF-8, US-ASCII or Windows-1252 are used. In Europe, ISO-8859-1. In Japan, EUC-JP, Shift_JIS or UTF-8.

The Unicode standard brings order to all these different encodings through its two universal transformation formats, UTF-8 and UTF-16 (the latter for Asian languages).
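A quick way to see the difference between the two transformation formats (modern Python shown for illustration):

```python
text = "café"

utf8 = text.encode("utf-8")       # 5 bytes: 'é' takes two bytes
utf16 = text.encode("utf-16-be")  # 8 bytes: two bytes per character

# Both byte sequences decode back to the same Unicode string
assert utf8.decode("utf-8") == utf16.decode("utf-16-be") == text
```

UTF-8 is compact for Latin text, while UTF-16 uses a fixed two bytes for all characters in the Basic Multilingual Plane, which often makes it more economical for Asian scripts.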

There is an excellent universal encoding detector written in Python (http://chardet.feedparser.org):

 

>>> import urllib
>>> urlread = lambda url: urllib.urlopen(url).read()
>>> import chardet
>>> chardet.detect(urlread("http://google.cn/"))
{'encoding': 'GB2312', 'confidence': 0.99}

>>> chardet.detect(urlread("http://yahoo.co.jp/"))
{'encoding': 'EUC-JP', 'confidence': 0.99}

>>> chardet.detect(urlread("http://amazon.co.jp/"))
{'encoding': 'SHIFT_JIS', 'confidence': 1}

>>> chardet.detect(urlread("http://pravda.ru/"))
{'encoding': 'windows-1251', 'confidence': 0.9355}

>>> chardet.detect(urlread("http://auction.co.kr/"))
{'encoding': 'EUC-KR', 'confidence': 0.99}

>>> chardet.detect(urlread("http://haaretz.co.il/"))
{'encoding': 'windows-1255', 'confidence': 0.99}

>>> chardet.detect(urlread("http://www.nectec.or.th/tindex.html"))
{'encoding': 'TIS-620', 'confidence': 0.7675}

>>> chardet.detect(urlread("http://feedparser.org/docs/"))
{'encoding': 'utf-8', 'confidence': 0.99}



XML lets you tell the client which encoding was used by declaring it on the first line of the XML document:

<?xml version="1.0" encoding="UTF-8"?>

HTTP can also indicate the encoding used, through its Content-Type header field, and it should preferably match the one declared in the transmitted XML document. If they differ, the encoding defined by Content-Type wins. This is a trap!


14.17 Content-Type

The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.

    Content-Type = "Content-Type" ":" media-type

Media types are defined in section 3.7. An example of the field is

    Content-Type: text/html; charset=ISO-8859-4

Further discussion of methods for identifying the media type of an entity is provided in section 7.2.1.

On these problems, see http://tools.ietf.org/html/rfc3023 (XML Media Types) and Mark Pilgrim's article "XML on the Web Has Failed": http://www.xml.com/pub/a/2004/07/21/dive.html
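The trap can be sketched in a few lines: given the HTTP Content-Type header and the XML declaration of the body, a conforming client must let the header's charset (when present) take precedence. This is an illustrative Python sketch, not a full implementation of the RFC 3023 rules:

```python
import re

def effective_charset(content_type, xml_body, default="utf-8"):
    """Return the charset a client must use: the charset parameter of
    the Content-Type header wins over the XML declaration."""
    m = re.search(r"charset=([\w.-]+)", content_type)
    if m:
        return m.group(1).lower()
    m = re.search(r'encoding="([\w.-]+)"', xml_body)
    if m:
        return m.group(1).lower()
    return default

# The header contradicts the document: the header wins.
cs = effective_charset('application/xml; charset=ISO-8859-1',
                       '<?xml version="1.0" encoding="UTF-8"?><doc/>')
# cs == "iso-8859-1"
```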

Protocol descriptions:

– APP (the Atom Publishing Protocol): http://tools.ietf.org/wg/atompub/

– GData, an extension of APP (http://code.google.com/apis/gdata/clientlibs.html). Blogger, Google Calendar, Google Code Search and Google Spreadsheets all expose RESTful web services with the same interface: the Atom Publishing Protocol (APP) with the GData extensions.

The authors then discuss 'POE' (Post Once Exactly, http://www.mnot.net/drafts/draft-nottingham-http-poe-00.txt), URI templates, and XHTML 4 and its many limitations.

For XHTML5 the problem is its adoption date: from late 2008 to 2022 for the most realistic estimates :(. See http://blog.welldesignedurls.org

I'll skip WADL.

Chapter 10: The Resource-Oriented Architecture Versus Big Web Services

These are the SOAP, WSDL and WS-* technologies: I'll skip them.

Chapter 11: Ajax applications as REST clients

For the authors, an Ajax application is a web service client that runs inside a web browser.

Gmail is a web service, and a client library exists for it: http://libgmail.sourceforge.net (http://libgmail.sourceforge.net/, "Python binding for Google's Gmail service").

AJAX has become simply Ajax because, for the authors, Ajax is an architectural style that requires neither JavaScript (you can use ActionScript, Java, VBScript or Python) nor XML (you can use JSON, HTML or plain text).

Almost all browsers provide a JavaScript XMLHttpRequest object supporting the five basic HTTP methods (GET, HEAD, POST, PUT and DELETE), with the ability to set the headers and body of an HTTP request. A site to test your browser: http://mnot.net/javascript/xmlhttprequest/

Given the differences between JavaScript implementations, using a library is strongly recommended.

JavaScript libraries:

Prototype. Warning: you cannot modify the header fields!

Dojo

For more information on JavaScript libraries see http://en.wikipedia.org/wiki/Category:JavaScript_libraries . The rising JavaScript library is jQuery; see http://docs.jquery.com/Sites_Using_jQuery

I'll skip JoD (JavaScript on Demand).

Chapter 12: Frameworks for RESTful Services

In this final chapter, the authors review three RESTful frameworks: Ruby on Rails, Restlet (Java) and Django (Python).

Ruby on Rails:

Rails owes its success to its adherence to conventions. Version 1.2 has a RESTful design.

Restlet

Restlet was influenced by the major Java technologies: the Servlet API, JavaServer Pages, HttpURLConnection and Struts.

See Retrotranslator (http://retrotranslator.sourceforge.net/#what)

Django

Django's design is similar to Rails's, although it makes fewer simplifications.

Installation:

# check out the Django sources
$ svn co http://code.djangoproject.com/svn/django/trunk/ ~/django_src

$ python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
/usr/lib/python2.5/site-packages

$ ln -s /cygdrive/h/django_src/django /usr/lib/python2.5/site-packages/django

# put django-admin.py on the PATH

$ ln -s ~/django_src/django/bin/django-admin.py /usr/local/bin

pve@pc72 ~/django_src
$ which django-admin.py
/usr/local/bin/django-admin.py

# The next day ==> update Django

$ cd ~/django_src; svn update
U django/test/client.py
U django/contrib/auth/__init__.py
U tests/modeltests/test_client/fixtures/testdata.json
U tests/modeltests/test_client/models.py
Updated to revision 5678.

# set up the django project: see http://www.djangoproject.com/documentation/tutorial01/

cd /cygdrive/e/projets/

Writing your first Django app, part 1 (http://www.djangoproject.com/documentation/tutorial01/)

Creating the pybookmarks project

$ date; django-admin.py startproject pybookmarks; date
Fri Jul 13 09:03:02 2007
Fri Jul 13 09:03:04 2007

pve@pc72 /cygdrive/e/projets
$ ls -als pybookmarks/
total 48
0 drwxr-xr-x 2 pve mkgroup-l-d 0 Jul 13 09:03 .
0 drwxr-xr-x 6 pve mkgroup-l-d 0 May 10 11:19 ..
0 -rw-r--r-- 1 pve mkgroup-l-d 0 Jul 13 09:03 __init__.py
16 -rwxr-xr-x 1 pve mkgroup-l-d 546 Jul 13 09:03 manage.py
16 -rw-r--r-- 1 pve mkgroup-l-d 2933 Jul 13 09:03 settings.py
16 -rw-r--r-- 1 pve mkgroup-l-d 235 Jul 13 09:03 urls.py

pve@pc72 /cygdrive/e/projets/pybookmarks
$ python manage.py runserver
Validating models…
0 errors found.

Django version 0.97-pre, using settings ‘pybookmarks.settings’
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Initial Django project

Note: for the database I had to select the site "ftp://sunsite.dk/projects/cygwinports" in order to install sqlite3. To check that sqlite3 is working I run:

$ python
Python 2.5.1 (r251:54863, Jun 19 2007, 22:55:07)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>>

Editing the application settings file /cygdrive/e/projets/pybookmarks/settings.py

 pve@pc72 /cygdrive/e/projets/pybookmarks
$ python manage.py syncdb
Creating table auth_message
Creating table auth_group
Creating table auth_user
Creating table auth_permission
Creating table django_content_type
Creating table django_session
Creating table django_site

You just installed Django’s auth system, which means you don’t have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (Leave blank to use ‘pve’):
E-mail address: pvergain@gmail.com
Password:
Password (again):
Superuser created successfully.
Installing index for auth.Message model
Installing index for auth.Permission model
Loading ‘initial_data’ fixtures…
No fixtures found.

# Create our application
$ date; python manage.py startapp bookmarks; date
Fri Jul 13 13:03:58     2007
Fri Jul 13 13:04:00     2007

pve@pc72 /cygdrive/e/projets/pybookmarks
$ ls -als bookmarks/
total 32
0 drwxr-xr-x 2 pve mkgroup-l-d  0 Jul 13 13:04 .
0 drwxr-xr-x 3 pve mkgroup-l-d  0 Jul 13 09:03 ..
0 -rw-r--r-- 1 pve mkgroup-l-d  0 Jul 13 13:04 __init__.py
16 -rw-r--r-- 1 pve mkgroup-l-d 57 Jul 13 13:04 models.py
16 -rw-r--r-- 1 pve mkgroup-l-d 26 Jul 13 13:04 views.py

Editing the data model (models.py)


     The data model

WARNING: don't forget the encoding declaration at the top of the file, plus the __unicode__ method, which is not described in the book!

#!/usr/bin/python
# -*- coding: UTF-8 -*-
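For reference, here is a sketch of what the Bookmark model might look like, reconstructed from the generated SQL below. The field names come from the CREATE TABLE output; everything else (field options, the era-specific `maxlength` keyword of Django 0.9x) is an assumption, not the book's exact code:

```python
#!/usr/bin/python
# -*- coding: UTF-8 -*-
from django.db import models
from django.contrib.auth.models import User

class Tag(models.Model):
    # "name" varchar(100) in the generated SQL
    name = models.CharField(maxlength=100)

    def __unicode__(self):
        return self.name

class Bookmark(models.Model):
    user = models.ForeignKey(User)              # "user_id" integer
    url = models.URLField()                     # "url" varchar(200)
    short_description = models.CharField(maxlength=255)
    long_description = models.TextField()
    timestamp = models.DateTimeField()
    public = models.BooleanField()
    tags = models.ManyToManyField(Tag)          # bookmarks_bookmark_tags join table

    def __unicode__(self):
        return self.url

    class Admin:
        # empty inner class: enables the default admin interface (see below)
        pass
```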

pve@pc72 /cygdrive/e/projets/pybookmarks
$ python manage.py sql bookmarks
BEGIN;
CREATE TABLE "bookmarks_bookmark" (
"id" integer NOT NULL PRIMARY KEY,
"user_id" integer NOT NULL,
"url" varchar(200) NOT NULL,
"short_description" varchar(255) NOT NULL,
"long_description" text NOT NULL,
"timestamp" datetime NOT NULL,
"public" bool NOT NULL
)
;
CREATE TABLE "bookmarks_tag" (
"id" integer NOT NULL PRIMARY KEY,
"name" varchar(100) NOT NULL
)
;
CREATE TABLE "bookmarks_bookmark_tags" (
"id" integer NOT NULL PRIMARY KEY,
"bookmark_id" integer NOT NULL REFERENCES "bookmarks_bookmark" ("id"),
"tag_id" integer NOT NULL REFERENCES "bookmarks_tag" ("id"),
UNIQUE ("bookmark_id", "tag_id")
)
;
COMMIT;

If you’re interested, also run the following commands:
  • python manage.py validate pybookmarks — Checks for any errors in the construction of your models.
  • python manage.py sqlcustom pybookmarks — Outputs any custom SQL statements (such as table modifications or constraints) that are defined for the application.
  • python manage.py sqlclear pybookmarks — Outputs the necessary DROP TABLE statements for this app, according to which tables already exist in your database (if any).
  • python manage.py sqlindexes pybookmarks — Outputs the CREATE INDEX statements for this app.
  • python manage.py sqlall pybookmarks — A combination of all the SQL from the ‘sql’, ‘sqlcustom’, and ‘sqlindexes’ commands.

Looking at the output of those commands can help you understand what’s actually happening under the hood.


$ python manage.py syncdb
Creating table auth_message
Creating table auth_group
Creating table auth_user
Creating table auth_permission
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table bookmarks_bookmark
Creating table bookmarks_tag

You just installed Django’s auth system, which means you don’t have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (Leave blank to use ‘pve’):
E-mail address: pvergain@gmail.com
Password:
Password (again):
Superuser created successfully.
Installing index for auth.Message model
Installing index for auth.Permission model
Installing index for bookmarks.Bookmark model
Installing index for bookmarks.Tag model
Loading ‘initial_data’ fixtures…
No fixtures found.

Writing your first Django app, part 2 (http://www.djangoproject.com/documentation/tutorial02/)

Generating admin sites for your staff or clients to add, change and delete content is tedious work that doesn’t require much creativity. For that reason, Django entirely automates creation of admin interfaces for models.

Django was written in a newsroom environment, with a very clear separation between “content publishers” and the “public” site. Site managers use the system to add news stories, events, sports scores, etc., and that content is displayed on the public site. Django solves the problem of creating a unified interface for site administrators to edit content.

The admin isn’t necessarily intended to be used by site visitors; it’s for site managers.

Add "django.contrib.admin" to your INSTALLED_APPS setting.

INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'pybookmarks.bookmarks',
'django.contrib.admin',
)

Run python manage.py syncdb. Since you have added a new application to INSTALLED_APPS, the database tables need to be updated.

$ python manage.py syncdb
Creating table django_admin_log
Installing index for admin.LogEntry model
Loading ‘initial_data’ fixtures…
No fixtures found.

Edit your mysite/urls.py file and uncomment the line below “Uncomment this for admin:”. This file is a URLconf; we’ll dig into URLconfs in the next tutorial. For now, all you need to know is that it maps URL roots to applications.

Recall from Tutorial 1 that you start the development server like so:

python manage.py runserver

Now, open a Web browser and go to “/admin/” on your local domain — e.g., http://127.0.0.1:8000/admin/. You should see the admin’s login screen:

Administration django

 

Just one thing to do: We need to specify in the bookmarks model that Bookmark objects have an admin interface. Edit the pybookmarks/bookmarks/models.py file and make the following change to add an inner Admin class:

The class Admin will contain all the settings that control how this model appears in the Django admin. All the settings are optional, however, so creating an empty class means “give this object an admin interface using all the default options.”

Now reload the Django admin page to see your changes. Note that you don’t have to restart the development server — the server will auto-reload your project, so any code modifications will be seen immediately in your browser.

 And it works!

 Admin bookmarks

Entering a bookmark

Things to note here:

  • The form is automatically generated from the Poll model.
  • The different model field types (models.DateTimeField, models.CharField) correspond to the appropriate HTML input widget. Each type of field knows how to display itself in the Django admin.
  • Each DateTimeField gets free JavaScript shortcuts. Dates get a “Today” shortcut and calendar popup, and times get a “Now” shortcut and a convenient popup that lists commonly entered times.

Writing your first Django app, part 3 (http://www.djangoproject.com/documentation/tutorial03/)

Installing dateutil: first download it, unpack it, go into the python-dateutil directory and run 'python setup.py install'.

$ python setup.py install
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/dateutil
copying dateutil/__init__.py -> build/lib/dateutil
copying dateutil/easter.py -> build/lib/dateutil
copying dateutil/parser.py -> build/lib/dateutil
copying dateutil/relativedelta.py -> build/lib/dateutil
copying dateutil/rrule.py -> build/lib/dateutil
copying dateutil/tz.py -> build/lib/dateutil
copying dateutil/tzwin.py -> build/lib/dateutil
creating build/lib/dateutil/zoneinfo
copying dateutil/zoneinfo/__init__.py -> build/lib/dateutil/zoneinfo
running install_lib
creating /usr/lib/python2.5/site-packages/dateutil
copying build/lib/dateutil/__init__.py -> /usr/lib/python2.5/site-packages/dateutil
copying build/lib/dateutil/easter.py ->/usr/lib/python2.5/site-packages/dateutil
copying build/lib/dateutil/parser.py -> /usr/lib/python2.5/site-packages/dateutil
copying build/lib/dateutil/relativedelta.py -> /usr/lib/python2.5/site-packages/dateutil
copying build/lib/dateutil/rrule.py -> /usr/lib/python2.5/site-packages/dateutil

copying build/lib/dateutil/tz.py -> /usr/lib/python2.5/site-packages/dateutil
copying build/lib/dateutil/tzwin.py -> /usr/lib/python2.5/site-packages/dateutil

creating /usr/lib/python2.5/site-packages/dateutil/zoneinfo
copying build/lib/dateutil/zoneinfo/__init__.py -> /usr/lib/python2.5/site-packages/dateutil/zoneinfo
byte-compiling /usr/lib/python2.5/site-packages/dateutil/__init__.py to __init__
.pyc
byte-compiling /usr/lib/python2.5/site-packages/dateutil/easter.py to easter.pyc
byte-compiling /usr/lib/python2.5/site-packages/dateutil/parser.py to parser.pyc
byte-compiling /usr/lib/python2.5/site-packages/dateutil/relativedelta.py to relativedelta.pyc
byte-compiling /usr/lib/python2.5/site-packages/dateutil/rrule.py to rrule.pyc
byte-compiling /usr/lib/python2.5/site-packages/dateutil/tz.py to tz.pyc
byte-compiling /usr/lib/python2.5/site-packages/dateutil/tzwin.py to tzwin.pyc
byte-compiling /usr/lib/python2.5/site-packages/dateutil/zoneinfo/__init__.py to
__init__.pyc
running install_data
copying dateutil/zoneinfo/zoneinfo-2007f.tar.gz -> /usr/lib/python2.5/site-packages/dateutil/zoneinfo
running install_egg_info
Writing /usr/lib/python2.5/site-packages/python_dateutil-1.2-py2.5.egg-info

# Check that it works
$ python
Python 2.5.1 (r251:54863, Jun 19 2007, 22:55:07)
[GCC 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.

>>> import dateutil.parser
>>>
More next week.

http://www.djangosites.org/latest/

http://www.djangobook.com/

http://www.djangoproject.com/

Posted in python, Architecture logicielle, ruby, Web applications, REST, http, XML, XHTML5, URI | Leave a Comment »

Oracle: a few interesting commands

Posted by Noam sur mars 20, 2007

A few commands I use (with SQL Developer) to generate the code of my business classes:

– SELECT 'public static readonly string ' || REPLACE(INITCAP(cname),'_','') || ' = "' || cname || '";' FROM col WHERE tname = 'tablename'

– SELECT ', new CAttributTable ( ' || REPLACE(INITCAP(cname),'_','') || ' )' FROM col WHERE tname = 'tablename'

– SELECT 'public string Value' || REPLACE(INITCAP(cname),'_','') || ' { get { return CUtilConversion.ToString(DataValue,' || REPLACE(INITCAP(cname),'_','') || '); }}' FROM col WHERE tname = 'tablename'

– SELECT 'MyValue += " ' || REPLACE(INITCAP(cname),'_','') || '=" + CUtilConversion.ValueSQL(Value' || REPLACE(INITCAP(cname),'_','') || ') + "\n";' FROM col WHERE tname = 'tablename'
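The core trick in these queries is REPLACE(INITCAP(cname),'_',''), which turns a column name like SHORT_DESCRIPTION into ShortDescription. The same transformation in Python, for anyone who wants to generate this kind of boilerplate outside the database (an illustrative sketch, not part of the original post):

```python
def initcap_no_underscores(column_name):
    """Mimic Oracle's REPLACE(INITCAP(cname), '_', ''):
    capitalize each underscore-separated word, then join them."""
    return "".join(word.capitalize() for word in column_name.split("_"))

prop = initcap_no_underscores("SHORT_DESCRIPTION")
# prop == "ShortDescription"
```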

Posted in active record, Architecture logicielle, Oracle | Leave a Comment »

Java5, annotations

Posted by Noam sur mars 17, 2007


-GUICE: http://google-code-updates.blogspot.com/2007/03/guice-googles-internal-java-dependency.html

Google has been using a blazingly fast, innovative, Java 5-based dependency injection framework in mission critical applications for quite some time.

The project is lead by Bob Lee, and we are pleased to say that we have released it to the community as open source software.

Guice wholly embraces annotations and generics, thereby enabling you to wire together and test objects with less effort than ever before. Annotations finally free you from error-prone, refactoring-adverse string identifiers.

Guice injects constructors, fields and methods (any methods with any number of arguments, not just setters). Guice includes advanced features such as custom scopes, circular dependencies, static member injection, Spring integration, and AOP Alliance method interception, most of which you can ignore until you need it.

Guice already has a community around it, and already powers Struts 2’s plugin architecture.

We asked Bob Lee a few questions about the project:

Why was Guice created?

We created Guice hoping to write less code and break up a multi-million line app. We looked at existing solutions, but we saw a lot of doors opened by the new Java 5 features. When I started from scratch I followed a use case driven approach and built a real application. As I wrote I was constantly asking myself "how do I really want to be writing this?".

I value pragmatism and followed Josh Bloch’s API design advice, especially, "when in doubt, leave it out."

Finally, we strove most of all for maintainability. Can a new programmer sit down at a million line code base and maintain it? We think type safety is a huge factor here.

Who should use Guice?

Anyone writing Java code. We see Guice as a lighter weight alternative to the factory pattern.

More information:


Posted in design pattern, java | Leave a Comment »

Tools for Web testing, Web recording/playback, nose unittest extensions, and code coverage.

Posted by Noam sur mars 17, 2007

Sources

Other web testing resources:

Introduction

Author: Titus Brown

 

Contents

At PyCon ’07, I gave a talk on testing tools in which I "performed" nine live demos of features from twill, scotch, pinocchio, and figleaf. These are tools for Web testing, Web recording/playback, nose unittest extensions, and code coverage.

This is the source code for that talk.

The Python code should work on any UNIX-like machine; I gave my talk with my MacBook, running Python 2.3.

Note that the only demo that didn’t work during the talk was the first one, in which I used ‘subprocess’ to start a CherryPy server. Since this is contraindicated IMO (I suggest using wsgi_intercept; see Demo 2) I was more or less happy that it failed. (It failed because I made a last minute adjustment to the command that ran app.py, and didn’t test it before the talk… ;(

 

Files

You can get the entire source code (including this text) at

http://darcs.idyll.org/~t/projects/pycon-07-talk-source-latest.tar.gz

The latest version of this README should always be accessible at

http://darcs.idyll.org/~t/projects/pycon-07-talk-source/README.html

 

Warning

This was billed as an « intermediate » talk at PyCon, and I jumped right into code rather than giving a detailed introduction. This document follows that strategy — so go look at the code and read what I have to say afterwards!

 

Feedback

I’m interested in your comments. Please either drop me an e-mail directly or go comment on the associated blog post.


Requirements

You will need twill 0.9b1 (the latest available through easy_install), nose 0.9.1 (the latest available through easy_install), scotch (latest), figleaf (latest), and pinocchio (latest). You will also need to have CherryPy 3.x and Django 0.95 installed.

You may need to adjust your PYTHONPATH to get everything working. Check out env.sh to see what I put in my path before running everything.


Demo 1: Testing CherryPy

Purpose: show how to use twill to test a (very simple!) CherryPy site.

twill is a simple Web scripting language that lets you automate Web browsing (and Web testing!). Here I’ll show you how to use it to run a few simple tests on a simple CherryPy site.

The CherryPy application under test is in cherrypy/app.py. You can check it out for yourself by running python app.py; this will set up a server on http://localhost:8080/.

The URLs to test are: http://localhost:8080/, which contains a form; http://localhost:8080/form/, which processes the form submit; and http://localhost:8080/page2/, which is just a simple « static » page.

The twill test script for this app is in cherrypy/simple-test.twill. All it does is go to the main page, confirm that it loaded successfully and contains the words "Type something", fill in the form with the string "python is great", and submit it. The final command verifies that the output is as expected.
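Based on that description (and the field name inp used by app.py's form), the script plausibly reads something like the following; this is a reconstruction for illustration, not necessarily the exact file from the talk:

```
# go to the main page and check that it loaded
go http://localhost:8080/
code 200
find "Type something"

# fill in the form and submit it
fv 1 inp "python is great"
submit

# verify the output of /form
find 'You typed: "python is great"'
```

These are all standard twill commands (go, code, find, fv, submit) — the same ones that show up again in the generated script in Demo 8.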

If you wanted to run all of this stuff manually, you would type the following (in UNIX):

python app.py &
twill-sh -u http://localhost:8080/ simple-test.twill

So, how do you do it with twill and nose?

Take a look at the unit test source code in cherrypy/demo1/tests.py. This is a nose test that you can run by typing

nosetests -w demo1/

from within the cherrypy/ subdirectory.

Try running it.

You should see a single ‘.’, indicating success:

.
----------------------------------------------------------------------
Ran 1 test in 2.069s

OK

Now try

nosetests -w demo1/ -v

You should see

tests.test ... ok
----------------------------------------------------------------------
Ran 1 test in 2.069s

OK

So yes, it all works!

Briefly, what is happening here is that:

  1. tests.py is discovered and imported by nose (because it has the magic ‘test’ in its name); then setup(), test(), and teardown() are run (in that order) because they are names understood by nose.
  2. setup executes the application app.py, capturing its stdout and stderr into a file-like object (which is accessible as pipe.stdout). setup has to wait a second for app.py to bind the port, and then sets the URL of the Web server appropriately.
  3. test then runs the twill script via twill.execute_file, passing it the initial URL to go to.
  4. teardown calls a special URL, exit, on the Web app; this causes the app to shut down (by raising SystemExit). It then waits for the app to exit.

A few notes:

  1. setup and teardown are each run once, before and after any test functions. If you added in another test function — e.g. test2 — it would have access to url and pipe and an established Web server.

  2. Note that url is not a hardcoded string in the test; it’s available as a global variable. This lets any function in this module (and any module that can import tests) adjust to a new URL easily.

  3. Also note that url is not hardcoded into the twill script, for the same reason. In fact, because this twill script doesn’t alter anything on the server (mainly because the server is incredibly dumb 😉), you could imagine using this twill script as a lifebeat check for the site, too, i.e. to check whether the site is minimally alive and processing Web requests properly.

  4. What if the Web server is already running, or something else is running on the port?

  5. More generally, what happens when the Popen call goes awry? How do you debug it?

    (Answer: you’ve got to figure out how to get ahold of the stdout/stderr and print it out to the environment, which can be a bit ugly.)

  6. What happens if /exit doesn’t work, in teardown?

    (Answer: the unit tests hang.)

Notes 4-6 are the reasons why you should think about using the wsgi_intercept module (discussed in Demo 2) to test your Web apps.


Demo 2: Testing CherryPy without exec’ing a process

Purpose: demonstrate the use of wsgi_intercept.

The use of subprocess in Demo 1 was a big ugly problem: once you shell out to a command, doing good error handling is difficult, and you’re at the mercy of the environment. But you needed to do this to run the Web server, right?

Well, yes and no. If your goal was to test the entire Web stack — from your OS socket recv, through the CherryPy Web server, down to your application — then you really need to do things this way.

But that’s silly. In general, your unit and functional tests should be testing your code, not CherryPy and your OS; the time for testing that everything works together is later, during your staging and end-to-end testing phase(s). Generally speaking, though, your OS and Web server are not going to be simple things to test and you’re better off worrying about them separately from your code. So let’s focus on your code.

Back to the basic question: how do you test your app? Well, there’s a nifty new Python standard for Web app/server interaction called WSGI. WSGI lets you establish a nicely wrapped application object that you can serve in a bunch of ways. Conveniently, twill understands how to talk directly to WSGI apps. This is easier to show than it is to explain: take a look at cherrypy/demo2/tests.py. The two critical lines are in setup(),

wsgi_app = cherrypy.tree.mount(app.HelloWorld(), '/')
twill.add_wsgi_intercept('localhost', 80, lambda: wsgi_app)

The first line asks CherryPy to convert your application into a WSGI application object, wsgi_app. The second line tells twill to talk directly to wsgi_app whenever a twill function asks for localhost:80.
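To make the shunt concrete, here is a toy illustration of what "talking directly to a WSGI app" means. Both names here (hello_app, call_in_process) are invented for this sketch; they are not part of twill's or wsgi_intercept's API, which do the same thing at the httplib level.

```python
def hello_app(environ, start_response):
    """A minimal WSGI application object -- the kind of thing
    cherrypy.tree.mount() hands back as wsgi_app."""
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"Type something"]

def call_in_process(app, path="/", method="GET"):
    """Call a WSGI app directly, with no socket involved -- the core
    idea behind twill.add_wsgi_intercept."""
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = dict(headers)

    body = b"".join(app({"PATH_INFO": path, "REQUEST_METHOD": method},
                        start_response))
    return captured["status"], body

status, body = call_in_process(hello_app)
assert status == "200 OK" and b"Type something" in body
```

Because the "request" is just a Python function call, no port is bound and nothing external can interfere — which is exactly the robustness argument made below.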

Does it work?

Well, you can try it easily enough:

nosetests -w demo2/ -v

and you should see

tests.test ... ok
----------------------------------------------------------------------
Ran 1 test in 0.827s

OK

So, yes, it does work!

Note that the test itself is the same, so you can actually use the test script simple-test.twill to do tests however you want — you just need to change the test fixtures (the setup and teardown code).

Note also that it’s quite a bit faster than demo1, because it doesn’t need to wait for the server to start up.

And, finally, it’s much less error prone. There’s really no way for any other process to interfere with the one running the test, and no network port is bound; wsgi_intercept completely shunts the networking code through to the WSGI app.

(For those of you who unwisely use your own Web testing frameworks, wsgi_intercept is a generic library that acts at the level of httplib, and it can work with every Python Web testing library known to mankind, or at least to me. See the wsgi_intercept page for more information.)


Demo 3: Basic code coverage analysis with figleaf

Purpose: demonstrate simple code-coverage analysis with figleaf.

Let’s move on to something else — code coverage analysis!

The basic idea behind code coverage analysis is to figure out what lines of code are (and more importantly aren’t) being executed under test. This can help you figure out what portions of your code need to be tested (because they’re not being tested at all).

figleaf does this by hooking into the CPython interpreter and recording which lines of code are executed. Then you can use figleaf’s utilities to do things like output an HTML page showing which lines were and weren’t executed.
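In miniature, the interpreter hook looks like this. It is a toy sketch using the standard sys.settrace mechanism, not figleaf's actual code:

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # record every (filename, line number) the interpreter executes --
    # in miniature, what figleaf's hook does
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"  # not exercised below, so never recorded

sys.settrace(tracer)
classify(5)
sys.settrace(None)
```

After this runs, the "positive" branch of classify is in the executed set and the "non-positive" branch is not — the same red/green distinction figleaf2html draws below.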

Again, it’s easier to show than it is to explain, so read on!

First, start the app with figleaf coverage:

figleaf app.py

Now, run the twill script (in other window):

twill-sh -u http://localhost:8080/ simple-test.twill

Then CTRL-C out of the app.py Web server, and run

figleaf2html

This will create a directory html/; open html/app.py.html in a Web browser. You should see a bunch of green lines (indicating that these lines of code were executed) and two red lines (the code for page2 and exit). There’s your basic coverage analysis!

Note that class and function definitions are executed on import, which is why def page2(self): is green; it’s just the contents of the functions themselves that aren’t executed.

If you open html/index.html you’ll see a general summary of code files executed by the Python command you ran.


Demo 4: More interesting code coverage analysis with nose and figleafsections

Purpose: demonstrate the figleafsections plugin that’s part of pinocchio.

The figleafsections plugin to pinocchio lets you do a slightly more sophisticated kind of code analysis. Suppose you want to know which of your tests runs what lines of code? (This could be of interest for several reasons, but for now let’s just say "it’s neat", OK?)

For this demo, I’ve constructed a new pair of unit tests: take a look at cherrypy/demo3/tests.py. The first test function (test()) is identical to Demo 2, but now there’s a new test function — test2(). All that this function does is exercise the page2 code in the CherryPy app.

Now run the following commands in the cherrypy/ directory:

rm .figleaf
nosetests -v --with-figleafsections -w demo3
annotate-sections ./app.py

This runs the tests with a nose plugin that keeps track of which tests are executing what sections of app.py, and then annotates app.py with the results. The annotated file is app.py.sections; take a look at it!

When you look at app.py.sections you should see something like this:

-- all coverage --
| tests.test2
| | tests.test
| | |        | #! /usr/bin/env python
| import cherrypy
|
| class HelloWorld(object):
|     def index(self):
+   |         return """<form method='POST' action='/form'>
|                   Type something: <input type='text' name='inp'>
|                   <input type='submit'>
|                   </form>"""
|     index.exposed = True
|
|     def form(self, inp=None, **kw):
+   |         return "You typed: \"%s\"" % (inp,)
|     form.exposed = True
|
|     def page2(self):
+     |         return "This is page2."
|     page2.exposed = True
|
|     def exit(self):
|         raise SystemExit
|     exit.exposed = True
|
| if __name__ == '__main__':
|     cherrypy.quickstart(HelloWorld())

What this output shows is that tests.test executed the index() and form() functions, while tests.test2 executed the page2() function only — just as you know from having read cherrypy/demo3/tests.py. Neat, eh?

See my blog post on the subject for some more discussion of how this can be useful.


Demo 5: Writing a simple twill extension to do form « fuzz » testing

Purpose: show how easy it is to write twill extensions.

Since twill is written in Python, it’s very easy to extend with Python. All you need to do is write a Python module containing the function(s) that you want to use within twill, and then call extend_with <module>. From that point on, those functions will be accessible from within twill. (Note that extension functions need to take string arguments, because the twill mini-language only operates on strings.)

For example, take a look at cherrypy/demo4/randomform.py. This is a simple extension module that lets you fill in form text fields with random values; the function fuzzfill takes a form name, a min/max length for the values, and an optional alphabet from which to build the values. You can call it like this:

extend_with randomform
fuzzfill <form> 5 15 [ <alphabet> ]

If you look at the randomform.py script, the only real trickiness in the script is where it uses the twill browser API to retrieve the form fields and fill them with text. Conveniently, this entire API is available to twill extension modules.
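The random-value core of such an extension is plain Python. The sketch below shows only that part; make_fuzz_value is a name invented here (the real extension function is randomform.fuzzfill, which additionally drives the twill browser API to fill the form):

```python
import random
import string

def make_fuzz_value(minlen, maxlen,
                    alphabet=string.ascii_letters + string.digits):
    """Build one random form value, the way a fuzzfill-style helper would.

    twill passes extension functions their arguments as strings, so the
    lengths arrive as str and are converted with int() here.
    """
    minlen, maxlen = int(minlen), int(maxlen)
    length = random.randint(minlen, maxlen)
    return "".join(random.choice(alphabet) for _ in range(length))

value = make_fuzz_value("5", "15")
assert 5 <= len(value) <= 15
```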

Let’s try running it! The twill script cherrypy/fuzz-test.twill is a simple script that takes the CherryPy HelloWorld application and fills in the main page form field with a random alphanumeric string. As in Demo 2, we can put this all together in a simple unit test framework; see cherrypy/demo4/tests.py for the actual code.

You can run the demo code in the usual way:

nosetests -w demo4/ -v

If you run it without output capture, you’ll even see the random text we inserted:

% nosetests -w demo4/ -v -s
tests.test ... 127.0.0.1 - - [15/Mar/2007:19:08:14] "GET / HTTP/1.1" 200 166 "" ""
closing...
==> at http://localhost/
Imported extension module 'randomform'.
(at /Users/t/iorich-dev/talk-stuff/cherrypy/demo4/randomform.pyc)
127.0.0.1 - - [15/Mar/2007:19:08:14] "GET / HTTP/1.1" 200 166 "" ""
closing...
==> at http://localhost/
fuzzfill: widget "inp" using value "0jX0vUXye0"
Note: submit is using submit button: name="None", value=""
127.0.0.1 - - [15/Mar/2007:19:08:14] "POST /form HTTP/1.1" 200 23 "" ""
ok
closing...
You typed: "0jX0vUXye0"
[15/Mar/2007:19:08:14] ENGINE CherryPy shut down
----------------------------------------------------------------------
Ran 1 test in 0.617s

OK

(Look for the text after « You typed »…)


Demo 6: Django fixtures for twill/wsgi_intercept

Purpose: show how to use wsgi_intercept and twill to test a simple Django app.

OK, I’ve shown you how to write automated tests for CherryPy Web apps. Let’s try it out for Django, now!

Since I don’t actually know any Django, let’s just try the Django intro app, a simple poll site that lets users select choices in a poll. The admin interface is a bit tough to test with twill, because it uses JavaScript, but we can test the main poll site easily enough.

Take a look at django/demo/tests.py for some code.

The first function you should look at is actually the last function in the file: TestDjangoPollSite.test. This function goes to « /polls », clicks on the « pycon » choice in the poll, submits it, and verifies that « pycon » has received 1 vote. (Unlike the CherryPy demos, here we’re using the twill Python API, rather than the scripting language.)

Behind this fairly simple-looking test function lie two layers of fixtures.

The TestDjangoPollSite.setup() function is run before the test() function, and it serves to reset the vote count in the database; it’s very much like a unittest fixture, in that it’s run prior to each test* function in TestDjangoPollSite. (If there were a teardown() function in the class, it would be run after each test* function.)

The tests.setup() and tests.teardown() serve the same purpose as their CherryPy analogs in Demo 2: setup() initializes Django and sets up the wsgi_intercept shunt mechanism so that twill can talk to the Django app directly through WSGI. In turn, teardown cleans up the WSGI shunt.

Demos 1/2 and Demo 6 collectively demonstrate (hah!) how easy it is to use twill to start testing your Django and CherryPy apps. Even the simple level of testing demonstrated here serves an important purpose: you can be sure that, at a minimum, your application is configured properly and handling basic HTTP traffic. (More complicated tests will depend on your application, of course.)

(Thanks to Mick for his post — I swiped his code!)


Demo 7: Recording and examining a Django session with scotch

Purpose: show how to use scotch to record a Django test.

(For this demo, you’re going to need an extra shell window, e.g. an xterm or another ssh session.)

Make sure you have scotch installed, and then run run-recording-proxy. This sets up an HTTP proxy server on port 8000 that records traffic into memory (and saves into a file when done). You should see

** scotch proxy server running on 127.0.0.1 port 8000 ...
** RECORDING to filename 'recording.pickle'

OK, now, in another shell, go into django/mysite/ and run python manage.py runserver localhost:8001. This runs the simple Django polling application on port 8001. You should see

Validating models...
0 errors found.

Django version 0.95.1, using settings 'mysite.settings'
Development server is running at http://localhost:8001/

Quit the server with CONTROL-C.

Now go to your Web browser and open the URL http://localhost:8001/polls/. You should see a page with a link containing the link text "what’s up?" This tells you that the Django app is running.

Set your Web browser’s HTTP proxy to ‘localhost’, ‘8000’. Make sure that your proxy settings forward ‘localhost’ (by default, Firefox does not send localhost requests through the proxy mechanism).

All right, now hit reload! If everything is working right, you should see the same "polls" page, but this time you’ll be going through the scotch proxy server. Check out the window in which you ran scotch — it should say something like

REQUEST ==> http://localhost:8001/polls/
++ RESPONSE: 200 OK
++ (77 bytes of content returned)
++ (response is text/html; charset=utf-8)

(# 1)

If so, great! It’s all working! (If not, well… one of us did something wrong ;).

OK, now go back to your Web browser and click through the poll (select "what’s up?", choose "pycon", and then hit "submit").

You should see a bunch more output on the proxy screen, including something like this (after the form submit):

REQUEST ==> http://localhost:8001/polls/1/vote/
(post form)
choice:  "3"
++ RESPONSE: 302 FOUND
++ (0 bytes of content returned)
++ (response is text/html; charset=utf-8)

(# 4)

REQUEST ==> http://localhost:8001/polls/1/results/

Already you can see that this is moderately useful for "watching" HTTP sessions, right? (It gets better!)

OK, now hit CTRL-C in the proxy server shell to stop it. It should say something like "saved 5 records!" These records are saved into the file recording.pickle by default, and you can look at some of the scripts in the scotch distribution (especially those under the bin/ directory) for some simple ideas of what to do with them.

All right, so you’ve seen that you can record HTTP traffic. But what can you do with the recording?


Demo 8: Convert the Django session into a twill script

Purpose: use scotch to generate a twill script from the recording in Demo 7.

Well, one immediately useful thing you can do with the recording is generate a twill script from it! To do that, type

translate-recording-to-twill recording.pickle

in the proxy window. You should get the following output:

# record 0
go http://localhost:8001/polls/

# record 2
#   referer = http://localhost:8001/polls/
go http://localhost:8001/polls/1/

# record 3
#   referer = http://localhost:8001/polls/1/
fv 1 choice '3'
submit

Don’t be shy — save this to a file and run it with twill-sh! It should work.

So that’s pretty convenient, right? It’s not a cure-all — generating tests from recordings can get pretty ugly, and with scotch I don’t aim to provide a complete solution, but I do aim to provide you with something you can extend yourself. (There are lots of site-specific issues that make it likely that you’ll need to provide custom translation scripts that understand your URL structure — these aren’t terribly hard to write, but they are site-specific.)
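A much-simplified version of that translation step can be sketched in a few lines. records_to_twill and the (method, url, form_fields) tuple shape are inventions for this sketch; scotch's real recording objects carry far more detail (headers, referer, response bodies):

```python
def records_to_twill(records):
    """Turn recorded requests into a twill script -- a simplified take
    on what translate-recording-to-twill does.

    Each record is a (method, url, form_fields) tuple.
    """
    lines = []
    for number, (method, url, form) in enumerate(records):
        lines.append("# record %d" % number)
        if method == "GET":
            lines.append("go %s" % url)
        else:
            # POST: fill each field of form 1, then submit
            for name, value in sorted(form.items()):
                lines.append("fv 1 %s '%s'" % (name, value))
            lines.append("submit")
    return "\n".join(lines)

script = records_to_twill([
    ("GET", "http://localhost:8001/polls/", {}),
    ("POST", "http://localhost:8001/polls/1/vote/", {"choice": "3"}),
])
```

Run on the two-record example above, this produces "go", "fv 1 choice '3'", and "submit" lines much like the real tool's output shown earlier.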


Demo 9: Replaying the Django session from the recording

Purpose: use scotch to play back the Web traffic directly and compare.

OK, and now for the last demo: the ultimate regression test!

Leave the Django site running (or start it up again) and, in the proxy window, type play-recorded-proxy recording.pickle. This literally replays the recorded session directly to the Django Web app and compares the actual output with the expected output.

You should see something like this:

==> http://localhost:8001/polls/ ...
... 200 OK (identical response)
==> http://localhost:8001/polls/1 ...
... 301 MOVED PERMANENTLY (identical response)

==> http://localhost:8001/polls/1/ ...

... 200 OK (identical response)

==> http://localhost:8001/polls/1/vote/ ...

... 302 FOUND (identical response)

==> http://localhost:8001/polls/1/results/ ...

... 200 OK

++ OUTPUT DIFFERS

OLD OUTPUT:

====

<h1>what's up?</h1>
<ul>

<li>the blue sky -- 0 votes</li>

<li>not much -- 0 votes</li>

<li>pycon -- 2 votes</li>

</ul>

====

NEW OUTPUT:

====

<h1>what's up?</h1>

<ul>

<li>the blue sky -- 0 votes</li>

<li>not much -- 0 votes</li>

<li>pycon -- 5 votes</li>

</ul>

====

What’s happening is clear: because we’re not resetting the database to a clean state, the vote counts are being incremented each time we run the recording — after all, in each replay we’re pushing the "submit" button after selecting "pycon".

Anyway, this is a kind of neat regression test: does your Web site still return the same values it should? Note that it’s very fragile, of course: if your pages have date/time stamps, or other content that changes dynamically due to external conditions, you’re going to have to write custom filter routines that ignore that content in the comparisons. But it’s at least a neat concept.
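One way such a filter routine could look (responses_match is an invented helper, not part of scotch — it just masks volatile content before comparing):

```python
import re

def responses_match(old, new, volatile=(r"\d+ votes",)):
    """Compare two response bodies after masking volatile content --
    the custom filtering a replay-based regression test needs.

    'volatile' is a sequence of regexes matching content that may
    legitimately change between runs (vote counts, timestamps, ...).
    """
    for pattern in volatile:
        old = re.sub(pattern, "<masked>", old)
        new = re.sub(pattern, "<masked>", new)
    return old == new

old = "<li>pycon -- 2 votes</li>"
new = "<li>pycon -- 5 votes</li>"
assert not responses_match(old, new, volatile=())  # raw diff, as in the demo
assert responses_match(old, new)                   # masked: counts ignored
```

With the vote-count pattern masked, the Demo 9 replay above would report an identical response instead of OUTPUT DIFFERS.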

(Again, I should note that this is neat, but it’s not clear to me how useful it is. scotch is very much a programmer’s toolkit at the moment, and I’m still feeling my way through its uses. I do have some other ideas that I will reveal by next year’s PyCon…)


Conclusions

I hope you enjoyed this romp through a bunch of different testing packages. I find them useful and interesting, and I hope you do, too.

Note that this stuff is my hobby, not my job, and so I tend to develop in response to other people’s requests and neat ideas. Send me suggestions!

Posted in python, Architecture logicielle, tests, Web Frameworks | 1 Comment »

A critique of the ASP.NET architecture

Posted by Noam on March 13, 2007

I have highlighted a few points from a study by Guillaume Saint-Etienne (original document: http://docs.google.com/View?docid=dhp3ggmx_24p2k9h7)

The annotated document: http://docs.google.com/Doc?id=dcc3chdz_3fbp8z6

Does ASP.NET do Model-View-Controller (natively)?

The answer is definitively NO.

http://codebetter.com/blogs/jeremy.miller/archive/2006/02/01/137457.aspx

Even though Microsoft has tried to claim otherwise, to ride the MVC fashion and show how well put together ASP.NET is.

The problem with ASP.NET is that a single object handles HTTP requests: the Page object.

It controls everything, and therefore mixes the "controller" code with the code that drives the "view", the rendering of elements as HTML.

Quite often the code that drives the "model" gets mixed in as well, that is, fetching data straight from the database with ADO.NET (which is what you get when you work WYSIWYG in Visual Studio, picking SQL Data Sources and dragging and dropping them onto the UI).

This anti-pattern has often been singled out by architects and developers: besides producing spaghetti code (object-oriented spaghetti, but spaghetti nonetheless), it makes systematic (automated) testing impossible.

Bugs are also more numerous and harder to fix, because responsibilities are not well enough isolated in the code. Everything is mixed together in the code-behind.

In the end, one may wonder how much progress has really been made since classic ASP (without the .NET).

In short, the MVC pattern brings real improvements, and it is best to follow it as soon as you are writing a site of more than a dozen pages, especially one that will get a lot of user feedback and go through many successive rounds of improvement.

In other words, it is indispensable on a project run with an Agile method (MSF, Scrum, Extreme Programming, and the like).

So to do MVC with ASP.NET you have to write extra code yourself, which is no small amount of work.

Or, if you are a bit more clever, use a framework that does it for you.

That is the role of the Monorail project: http://www.castleproject.org/monorail/index.html

The idea is: MVC is good, and I want to use it (for all the advantages listed above, testability in particular), but I want to choose which system takes care of the View (that is, the RENDERING to HTML or, while we are at it, to some other similar format).

In ASP.NET as Microsoft ships it, there are few alternatives to WebForms for choosing a rendering engine (short of writing components that chain ‘Response.Write’ calls, but I know nobody serious who has gone down that road).

You then have to search the Web for independent projects, many of them inspired by what is being done in the Java community.

Monorail uses ASP.NET as its foundation. It is written as an ISAPI filter, invoked by the ASP.NET worker process (aspnet_wp.exe).

Monorail strongly resembles Rails-style frameworks in Java (and Ruby on Rails), which are hugely popular with developers.

You can see this by glancing at their respective APIs:

http://api.rubyonrails.org/

http://www.castleproject.org/monorail/documentation/v1rc2/manual/mrtypedocs/index.html

Monorail proposes using ActiveRecord, which is built on NHibernate; NHibernate, in turn, is built on ADO.NET.

That is a long way from raw ADO.NET, but all these layers are so many lines of code that you do not have to write.

Because in the end it is very simple to use and fully automatic.

But you can perfectly well use any other data-access technique.


And on a bigger project, or when you want to stick to a solid architecture, you will soon find yourself backing the view with an object from the application layer (Application Layer or Service Layer, as described by Martin Fowler et al.).

See my post on the subject: http://www.dotnetguru2.org/gse/index.php?p=530&more=1&c=1&tb=1&pb=1

This layer gives you strong decoupling and, above all, separation of concerns. These application-layer objects specialize in the final, high-level processing (that is, the processing closest to the GUI and to user actions).

They work in a quasi-procedural way and expose only methods.

These methods, however, take and return entity objects (or data objects).

The two are kept isolated: in this kind of architecture, entity objects expose only properties.

So in an MVC/MVP model, the M (Model) can be a "business" or Service object whose methods provide business capabilities (including the inevitable CRUD, but presented in other terms, terms fully suited to high-level uses, i.e. UIs and services).

This M will necessarily refer to entity objects that structure the information (as the objects generated by any object/relational mapping tool do so well).


Posted in Architecture logicielle, ASP.NET | Leave a Comment »