International Working Conference, Hotel La Reserve, Bellevue/Geneva, Switzerland
THE USE OF INTERNET AND WORLD-WIDE WEB FOR TELEMATICS IN HEALTHCARE
Rapporteur's notes Thursday, September 7, 1995
(Dr. Yves Ligier - CIH - HUG, Geneva, Switzerland)
Chair: Professor Paolo ZANELLA, Director, European Bioinformatics Institute, Cambridge, UK
Co-Chair: Dr. Ron APPEL, Imaging Dept. Geneva University Hospital, Geneva, SWITZERLAND
U. AMALDI, A. BAIROCH, G. DE MOOR, J. LOCH, O. RATIB, P. SZOLOVITS
I believe we have to start by asking the following questions: what would you like to see as the next step in using the Web? What are your expectations for the Web and the main functions you urgently want to have on the Web? Can you already identify some major developments?
In order to improve the present situation there is a tremendous amount of work to be done. I believe it would be useful if we could start by suggesting what it is that we would like to see done as soon as possible.
First, I will speak from the viewpoint of the WWW Consortium (W3C), and then I will add a few of my own thoughts. The consortium is addressing a number of important issues. Security is a hot topic, and the subject of the first formal working group. The goal is to reach consensus on a set of standards that provide adequate functionality and that interoperate among all the WWW software providers. A second issue, for which a working group is also organised, concerns cooperative work. Additional topics that are also receiving consideration include: stable URLs, to allow documents to refer to others without losing their connection if the target document moves to a different machine; electronic micro-payment schemes, to allow charging for access to be integrated into low-level Web mechanisms; and downloadable executable code, such as Java. From my own viewpoint, there are two critical topics: (1) The current Web functionality gives us lots of nouns but only a single verb, "click", which means "get". If the Web is to serve broader functions, we will need to develop a richer language of actions, for example at least to update information. (2) An important architectural principle should be that anything doable by a user can be done by a program. Then, as we develop conventional ways of using the Web, we can abstract such usage to produce new routine functions. This leads to the interesting research question of how to "sink applications into the middleware" to make them available as building blocks for new applications.
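The "one verb" observation can be made concrete against the HTTP protocol itself, which already defines richer verbs than the GET that a browser click performs. The sketch below, in Python, builds request lines for fetching versus updating a resource; the host path shown is invented for illustration:

```python
# Sketch: a browser "click" issues only one HTTP verb, GET.
# HTTP also defines verbs such as PUT, which updates a resource in place.
# The resource path below is hypothetical.

def request_line(verb, path):
    """Build the first line of an HTTP/1.0 request."""
    return f"{verb} {path} HTTP/1.0"

# "click" = fetch a document
read_only = request_line("GET", "/records/patient42.html")
# a richer action: replace the document with updated content
update = request_line("PUT", "/records/patient42.html")

print(read_only)  # GET /records/patient42.html HTTP/1.0
print(update)     # PUT /records/patient42.html HTTP/1.0
```

The architectural point (2) is visible here as well: a program composing request lines can do exactly what a user at a browser can do, which is what makes Web usage abstractable into reusable functions.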
I have different concerns. The first point is the following: as has been mentioned earlier, it is relatively easy to create and set up a W3 server, but it is difficult to maintain it. Maintenance is a time-intensive task, and if a server is not maintained it rapidly degrades and dies. The quality of the server is the next issue. There are a lot of available servers; some of them are empty while others provide very useful information. There is no control of the quality of data provided by the servers, and this is an important problem. How can you be sure this information is valid, that it is not wrong? Next is the problem of archives. Information is evolving rapidly, and it can be changed very easily on the Web, so that the data you are referring to, which was available some time ago, may not be available anymore. It is a problem of "persistence" of data. There is also a problem in the domain of journal publication. Some journals are now published on the Web, and this is a revolution in publishing. We have to think about the best way to go about this. My last comment concerns the long-term evolution of the net. The net is degrading; the performance is going down with new users. What happens when the many millions of new users of Windows 95 get connected to the net?
No doubt the bandwidth will increase. The question is whether it will grow fast enough to cope with the growing net-user community.
From a commercial standpoint, my concern is more oriented towards quality of product. Is it possible to deliver a real product with good and accurate data? The current W3 model must be considered as a driver to lead us to other new models that can sustain quality access and products.
G. de Moor
As I have said earlier, the services of the Commission have a list of new projects (62) developing new WWW tools under the 4th framework R & D programme. We need products to author or to edit data in an easy way (e.g. Symposia, Grif). Quality of data and validation of knowledge, as well as security, are other important issues. My major concern is the one of quality, so we need a kind of authority and also a kind of registration of sources. But to be very concrete: if I had to propose a project I would call it HealthSurf. The "SURF" would be an acronym standing for an SGML-based Universal Registration Formalism. More specifically, in connection with healthcare informatics (and the electronic healthcare record), we still lack a universal interchange formalism. I consider that SGML and HTML could replace antiquated syntaxes. We need a far more flexible SGML-based tagged system for electronic information exchange in medicine and healthcare, and that is a project I would like to propose.
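To make the idea of a tagged interchange formalism concrete, the Python sketch below parses a minimal SGML/XML-style record. All tag names and values here are invented for illustration; a real formalism would be defined by an agreed DTD and vocabulary:

```python
# A toy sketch of an SGML/XML-style tagged healthcare record.
# Tag names and the record content are hypothetical examples only.
import xml.etree.ElementTree as ET

record = """<patient-record>
  <id>12345</id>
  <observation code="BP">120/80</observation>
</patient-record>"""

root = ET.fromstring(record)
print(root.find("id").text)                  # the patient identifier
print(root.find("observation").get("code"))  # the observation's code attribute
```

The appeal of such a tagged syntax is exactly what the speaker notes: any receiving system can pick out the fields it understands without a rigid, antiquated fixed-position syntax.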
I see two problems in using the Web. The first one is security, which has also been addressed by different panelists. The second problem is billing: who pays for what? It is important now to organize the payment facility. There is a different problem for each type of use. For instance, it is obvious that the user has to pay for a service of the type of a lifelong personal record. Eventually, of course, the health service or the insurance company - according to the country - will pay only if the service can be proved to improve the overall health care and/or reduce the costs. The showcase of hadrontherapy is very different. Here we have to exclude the patients who will not clearly profit from the treatment in a few high-tech centres. In this case patients will be examined and many of them will be refused - who pays here? We have some tentative solutions in the paper handed out, but this is certainly a problem!
Question from the floor:
Cyberspace is the next challenge! Could we suppose that it will be supplied by just a few providers? Will the user have to choose one of the providers and will the accessible information be limited or restricted by this provider?
The Internet is a free form of access; the user is free to surf through the net. But this is a social question, not a technical one. We have to decide what we want.
You have access to a large amount of information on the Web, among which you can download numerous pieces of software, but you cannot rely on these elements to build a real system. Indeed, if you have a problem there is nobody to call: no support, no updates, no maintenance. The software you get is cheap, it is free - what can you do with it? When providing a real product you have to support it and maintain it, so you need money to support it. On the one hand you can get free software that can be considered and used more as a prototype; on the other hand you can have a real product, but you have to pay for it.
J. van Bemmel:
I got the message from all of you that you believe the World-Wide Web and Internet will not ultimately be the vehicle for healthcare. The Web is a driver of change, but in which direction does it lead? Another problem with the Web is that I can get access to a lot of remote data, I can download them, but can I trust them? Are they reliable? Concerning, for example, the images from a patient record, they are produced in one place and used in another where we know nothing about the source. There is no free ticket to reliable data and validated knowledge and we have to resolve this issue before we can start using the Web intensively. The systems should be able to talk to each other at a functional level. For example, data related to a patient (the patient record), should be able to follow the patient when they move from one location to another.
My point is summarised in the following question: what should happen, apart from the establishing of data security, confidentiality and privacy, so that systems can talk to each other effectively and that the data and knowledge are validated? That is the challenge!
We get excellent data from doctors provided they do their job correctly. So, we can trust these data. The Web works in this same conventional way. Data are not created by the Web, they are just transported by it, so it cannot be responsible for the quality of the data. Everybody can play with the Web, but in order to get a professional tool, leave it to professional users.
Question from the floor:
Where does Reuters plan to make money with the Web? In the "medicare" activity? There are two possible scenarios: one is collecting, selecting, selling, and maybe validating information. The other is to set up a network and to operate it between people who want to exchange information or who want to sell to one another.
As professionals, we are here to help people to integrate the Internet technology into the healthcare domain.
We need to distinguish between two types of information available on the Web. First, research and academic information and, second, more commercial information such as patient records. The foundation should be concerned with research information and should help guarantee the quality of servers. The foundation could issue a label and give it to reviewed servers. A list of approved servers could be defined and reviewed periodically.
Question from the floor:
Facilities provided on the Web will become more and more sophisticated but there is relatively little training going on for healthcare professionals. How can the normal user fully benefit from the different systems?
The underlying interactive model of the Web browsers is simple and easy enough for everybody to go on the net and click. We have to provide some limited but useful capabilities, and we can profit from any PC with a simple user interface. And even if the Web provides facilities that not all users benefit from, it is not really a problem.
Yes! It is like asking "do we really explore all the capabilities of our car?"
There is a problem with access to information. To reach some piece of information you have to get through a more-or-less large set of pointers. But once you have found it, you just go directly to it without reviewing the different servers on your original path anymore. So, how can you be sure that the information you found is still the most relevant one? You cannot be sure that your source of information is the most up-to-date one. So we need some intelligent agents to find what we are looking for; otherwise we will continue to access the same sources of information, and if we do not take the extra time required to explore previous or new links we will limit ourselves too much.
There are some programming languages that have been designed to support exploration of the Web, such as Telescript. These support the creation of programmes which can move to various sites, explore the information available at each site, and then spawn additional subprocesses that follow threads of investigation and eventually integrate what they have found and report back to the user. In addition to the great technical challenges of building such languages, there is the serious question of how users will know how to express their search strategies.
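The behaviour described - visiting sites, following threads of investigation, and reporting back - can be sketched without any real network. The Python toy below explores a simulated in-memory "web" breadth-first; the page names and contents are invented for illustration, and a real Telescript-style agent would of course migrate between actual servers:

```python
# A toy exploring agent: visit pages, follow links to a fixed depth,
# and report back what was found. The "web" is a hypothetical
# in-memory graph standing in for real servers.
from collections import deque

web = {
    "home":     {"links": ["cancer", "diabetes"], "info": "index"},
    "cancer":   {"links": ["trials"],             "info": "cancer server"},
    "diabetes": {"links": [],                     "info": "diabetes server"},
    "trials":   {"links": [],                     "info": "clinical trials"},
}

def explore(start, depth=2):
    """Breadth-first exploration from `start`, collecting each page's info."""
    found, seen = [], {start}
    queue = deque([(start, 0)])
    while queue:
        page, d = queue.popleft()
        found.append(web[page]["info"])
        if d < depth:
            for link in web[page]["links"]:
                if link not in seen:
                    seen.add(link)
                    queue.append((link, d + 1))
    return found

print(explore("home"))
# ['index', 'cancer server', 'diabetes server', 'clinical trials']
```

Even this toy exposes the open question raised above: the agent's mechanics are simple, but deciding *where* to go and *what* counts as relevant - the search strategy - is the part the user must somehow express.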
Question from the floor:
We heard many things about healthcare, but all the described systems are oriented towards health professionals, what is available for normal citizens in the field of health?
There are good public resources that are available to every citizen. Just to mention a few domains for which there is an available server: cancer, diabetes, Parkinson's disease, and so on. These servers are disease-oriented, and I am pretty sure that you can find a server dedicated to your disease.
As a cardiologist and medical practitioner, I consider that the single verb "click" is enough to interact with the Web and work in an efficient way. Concerning the reliability of data, the Web should have a way to validate data in the same way that it is validated on paper. All the quality controls already done for paper documents can also be done for electronic documents.
In my experience with patient records, since they involve text and clinical judgement, we have always had the habit at HUG of looking, first of all, at who wrote the record. You can trust what has been written on the strength of the name of the experienced physician who has endorsed it. What causes problems in establishing medical records is when a record has been written in a fuzzy or rather journalistic manner. Even the best computer will be rendered useless in trying to sort something out of it.
http://www.hon.ch/Conf/Info/session2_t.html | Last modified: Wed Mar 12 1997