DSpace Collection: Attivo fino al 31 dicembre 2012Attivo fino al 31 dicembre 2012http://hdl.handle.net/2307/232015-04-28T06:50:42Z2015-04-28T06:50:42ZA multi-robot system : from design to intelligent cooperationDi Rocco, Mauriziohttp://hdl.handle.net/2307/39882015-03-12T09:39:19Z2011-03-25T23:00:00Z<Title>A multi-robot system : from design to intelligent cooperation</Title>
<Authors>Di Rocco, Maurizio</Authors>
<Issue Date>2011-03-26</Issue Date>
<Abstract>The proposed work aims at contributing to the domain of multi-robot systems. In recent years these architectures have been investigated in depth across various applications, thanks to their flexibility and robustness compared to monolithic structures.
Looking at commercial platforms, one of the main limitations lies in the hardware configuration. In most cases the low-level control software is not open source, which prevents a detailed customization of the robot. This drawback can be partially overcome by plug-in boards that ease the use of a particular sensor. However, whenever an application requires heterogeneous transducers, managing several boards increases the computational burden on the CPU, the energy consumption and the cost.
The first contribution of this thesis is the description of the prototyping of a multi-robot platform. The robots, based on a dual-layer architecture, have been designed to be low-cost and flexible in order to cope with several scenarios. Their reduced dimensions allow the units to run experiments in small environments, like those typically available in a research laboratory, even interacting with commercial platforms such as sensor networks.
Afterward, several experimental research activities are covered.
In the field of motion control, two techniques for the encircling problem, i.e. the task in which a set of robots rotates around a target, are presented. This toy problem is very appealing to the scientific community because it involves a series of technological issues spanning communication, motion estimation and control design.
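The encircling task can be made concrete with a toy controller. The law below is an assumed, textbook-style spacing rule, not taken from the thesis: each robot rotates around the target at a nominal rate and corrects toward equal angular spacing with its counter-clockwise neighbour.

```python
import math

# Toy encircling controller (an assumed spacing law, not the thesis's):
# n robots hold angles theta_i around a fixed target; each rotates at a
# nominal rate omega and corrects toward equal spacing with its neighbour.
def simulate(n=4, omega=0.5, k=1.0, dt=0.01, steps=5000):
    theta = [0.1 * i * i for i in range(n)]       # arbitrary initial angles
    for _ in range(steps):
        gaps = [(theta[(i + 1) % n] - theta[i]) % (2 * math.pi)
                for i in range(n)]
        theta = [t + dt * (omega + k * (g - 2 * math.pi / n))
                 for t, g in zip(theta, gaps)]
    return [(theta[(i + 1) % n] - theta[i]) % (2 * math.pi) for i in range(n)]

gaps = simulate()
# All gaps approach 2*pi/n: an equally spaced ring rotating at rate omega.
```

The gap dynamics are a consensus iteration on a directed cycle, so the gaps converge to their average, 2π/n, while the whole formation keeps rotating.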
Data fusion, one of the most widely studied problems in robotics, has also been taken into account. One of the traditional approaches is based on the Bayesian framework, which allows a straightforward combination of data gathered by several sensors. In this dissertation, such a technique has been applied to environmental monitoring. In particular, extensive experiments were performed using the aforementioned platforms in the domain of gas source localization: the task consists in searching for a source releasing gas into an environment. Besides the Bayesian framework, other techniques can be used. In particular, a focus on the Transferable Belief Model (TBM) is provided. Unlike the probabilistic formulation, the TBM can effectively represent the concepts of ignorance and contradiction, which, for example, belong to human reasoning. In this work a technique for data fusion in a multi-agent context is proposed. The theoretical results have been validated by extensive experiments on cooperative topological map building.
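The role of ignorance and contradiction in the TBM can be illustrated with the unnormalised conjunctive combination rule; the frame, the two "robots" and the mass values below are invented for illustration.

```python
from itertools import product

# Transferable Belief Model sketch: masses are assigned to *sets* of
# hypotheses. Mass on the full frame encodes ignorance; after the
# unnormalised conjunctive rule, mass on the empty set quantifies conflict
# (contradiction) between the sources.
FRAME = frozenset({"A", "B"})

def conjunctive(m1, m2):
    out = {}
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        out[inter] = out.get(inter, 0.0) + v1 * v2
    return out

# Robot 1 mostly believes A but keeps some ignorance; robot 2 believes B.
m1 = {frozenset({"A"}): 0.7, FRAME: 0.3}
m2 = {frozenset({"B"}): 0.6, FRAME: 0.4}
fused = conjunctive(m1, m2)
# fused[frozenset()] ≈ 0.42: the two robots contradict each other that much.
```

A probabilistic fusion would be forced to renormalise this conflict away; the TBM keeps it explicit, which is precisely the representational advantage the abstract mentions.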
Finally, an overview of the integration between mobile and static sensor networks (SN) is provided. One of the main issues for a static network is estimating the distances between its nodes. In this work a localization algorithm, based on a modified Extended Kalman Filter, is presented. As for the previous topics, experimental results support the theoretical approach.
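As a baseline reference for the EKF-based localization, here is a plain textbook EKF measurement update for a single range measurement; the thesis uses a modified filter, so this is only the standard building block, with made-up anchors and a made-up true node position.

```python
import math

# One EKF measurement update for range-based node localisation
# (plain textbook form; the thesis's modified EKF is not reproduced).
def ekf_range_update(x, P, anchor, z, R=0.01):
    ax, ay = anchor
    dx, dy = x[0] - ax, x[1] - ay
    r = math.hypot(dx, dy)                  # predicted range h(x)
    H = (dx / r, dy / r)                    # Jacobian of h at x
    PHt = (P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1])
    S = H[0] * PHt[0] + H[1] * PHt[1] + R   # innovation covariance
    K = (PHt[0] / S, PHt[1] / S)            # Kalman gain
    y = z - r                               # innovation
    x_new = (x[0] + K[0] * y, x[1] + K[1] * y)
    P_new = [[P[0][0] - K[0] * PHt[0], P[0][1] - K[0] * PHt[1]],
             [P[1][0] - K[1] * PHt[0], P[1][1] - K[1] * PHt[1]]]
    return x_new, P_new

# Iterating updates from a few anchors pulls the estimate toward the
# (hypothetical) true node position (3, 4).
x, P = (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]]
for anchor in [(5.0, 0.0), (0.0, 5.0), (5.0, 5.0)] * 10:
    true_range = math.hypot(3.0 - anchor[0], 4.0 - anchor[1])
    x, P = ekf_range_update(x, P, anchor, true_range)
```

With noiseless ranges this behaves like a recursive Gauss-Newton scheme; the estimate drifts from the origin toward (3, 4) as anchors are revisited.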
In the last chapter, conclusions and possible improvements are discussed.</Abstract>2011-03-25T23:00:00ZComplex technological networks : structure, dynamics and algorithmsMeloni, Sandrohttp://hdl.handle.net/2307/39872015-03-12T09:36:35Z2011-03-25T23:00:00Z<Title>Complex technological networks : structure, dynamics and algorithms</Title>
<Authors>Meloni, Sandro</Authors>
<Issue Date>2011-03-26</Issue Date>
<Abstract>Networks are all around us, from the simplest forms of life to the complexity of our brain. We ourselves are part of many networks, through the social interactions we engage in or as a result of the biochemical interactions inside a single cell. Networks have proven to be present not only in social or biological contexts but also in technological systems. The major example is the Internet, perhaps the biggest technological network. Other examples include highways, transportation systems, and power generation and distribution. Thus, the study of the characteristics of networks is of primary importance for the advancement of science.
In the last decade a new way of thinking about networks arose. Thanks to the availability of huge amounts of digital data, computational power and fast communications, a different kind of network has been analyzed. Such networks are known as Complex Networks.
The aim of this work is to analyze, model and control dynamical systems through complex network theory. We propose a series of dynamical models on large graphs to represent complex and nonlinear dynamics. We mainly focus on theoretical models that, at a high level of abstraction, represent real-world technological and social systems. We consider four different types of dynamics on complex networks, ranging from social interactions modelled with a game-theoretic approach to epidemic spreading.
In the first part of the work we concentrate on the structure and robustness of complex networks and their application to real-world problems such as sensor network design. We propose a distributed algorithm for the creation of sensor network topologies resilient to both random failures and attacks. Numerical evidence shows that the generated topologies outperform the classical random geometric graph structure.
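The comparison baseline mentioned here can be sketched directly; this builds a plain random geometric graph plus a crude robustness proxy (the proposed distributed algorithm itself is not reproduced, and the parameters are illustrative).

```python
import random, math

# Baseline topology: a random geometric graph (RGG) places n nodes uniformly
# in the unit square and links every pair closer than a radius r.
def random_geometric_graph(n, r, seed=0):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def is_connected(edges):
    seen, stack = {0}, [0]
    while stack:
        for v in edges[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(edges)

g = random_geometric_graph(100, 0.2)
connected = is_connected(g)
# A crude robustness proxy: the minimum degree of the topology.
min_degree = min(len(nbrs) for nbrs in g.values())
```

Resilient-topology algorithms are typically judged against exactly these quantities (connectivity after node removals, degree distribution) on RGGs of comparable density.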
Next, we study the emergence of cooperative behavior in a mobile-agent environment. We consider a two-dimensional plane in which agents can move randomly and interact with the neighbors within their visibility radius. We model social interactions as an evolutionary version of the prisoner's dilemma and analyze the conditions under which cooperation is an evolutionarily stable strategy. Our analysis highlights that, although defection is favoured, there exists a considerable region of the parameter space in which cooperation becomes advantageous. Such conditions are fulfilled when the agent density is high enough to produce small clusters in which cooperation can grow, and the agents' velocity is low enough to ensure cluster stability for a reasonable amount of time.
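The mobile-agents game can be sketched as follows; all parameter values (payoffs, radius, speed, imitation rule, the Manhattan-distance visibility test) are illustrative stand-ins, not the thesis's.

```python
import random

# Toy mobile-agents prisoner's dilemma: agents move randomly on the unit
# square (with wrap-around), play the PD with everyone in their visibility
# radius, then imitate the highest-scoring neighbour.
T, R, P, S = 1.5, 1.0, 0.0, -0.5      # temptation > reward > punishment > sucker
def payoff(me, other):
    if me and other:     return R     # both cooperate
    if me and not other: return S
    if not me and other: return T
    return P

def step(pos, coop, radius=0.15, speed=0.02, rng=random.Random(1)):
    pos = [((x + rng.uniform(-speed, speed)) % 1,
            (y + rng.uniform(-speed, speed)) % 1) for x, y in pos]
    n = len(pos)
    nbrs = [[j for j in range(n) if j != i
             and abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1]) < radius]
            for i in range(n)]
    score = [sum(payoff(coop[i], coop[j]) for j in nbrs[i]) for i in range(n)]
    new_coop = list(coop)
    for i in range(n):
        if nbrs[i]:
            best = max(nbrs[i] + [i], key=lambda j: score[j])
            new_coop[i] = coop[best]   # imitate the best-scoring neighbour
    return pos, new_coop

rng = random.Random(0)
pos = [(rng.random(), rng.random()) for _ in range(50)]
coop = [rng.random() < 0.5 for _ in range(50)]
for _ in range(20):
    pos, coop = step(pos, coop)
```

Sweeping `radius` (density of interactions) and `speed` against the final cooperator fraction reproduces the qualitative trade-off the abstract describes: clusters of cooperators only survive when agents move slowly enough.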
In the second part we face two interesting problems on top of communication networks: the study of fluctuations of mean flows in a traffic network, and an optimization technique for congestion control. Regarding the first issue, we present a model based on random walk theory that incorporates most of the characteristics of real communication systems, such as the network structure and fluctuations in external arrivals. Solving the model reveals a direct relation between flow fluctuations and three factors: the variation in the number of packets in the network, the degree of the nodes, and the length of the time window in which measurements are performed. The second problem is addressed by introducing a simple traffic model that incorporates some simple traffic control strategies. We show that simple adaptive strategies can considerably slow down the onset of congestion, but at the cost of changing the nature of the transition between the free-flow regime and the congested state from smooth to abrupt. Then, by introducing an empathetic optimization strategy, we obtain both a delayed onset of congestion and a smooth transition between the two regimes.
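The random-walk backbone of the flow model rests on a standard fact: in the stationary regime of an unbiased walk on an undirected network, visit frequencies are proportional to node degree, so mean flows track degrees. A quick numerical check on a four-node toy graph:

```python
import random

# Unbiased random walk on an undirected graph: the stationary probability of
# being at node v is deg(v) / (2 * #edges), so mean flows scale with degree.
edges = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
rng = random.Random(42)
visits = {v: 0 for v in edges}
node = 0
for _ in range(200_000):
    node = rng.choice(edges[node])
    visits[node] += 1

total_degree = sum(len(v) for v in edges.values())   # = 2 * #edges = 10
for v in edges:
    expected = len(edges[v]) / total_degree          # stationary probability
    # empirical visit frequency visits[v] / 200_000 is close to `expected`
```

The fluctuation results of the abstract are then obtained on top of this degree-proportional mean, by adding fluctuating external arrivals and a finite measurement window.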
In the last part of the work we address a class of spreading processes on networks. Our study starts with a simple analytical formulation to model the spreading of a disease under a class of interaction rules. We propose an alternative formulation to the classical heterogeneous mean field (HMF) to study an SIS model on scale-free networks. The model can be integrated numerically, and we analytically derive its equivalence with the HMF by recovering the epidemic threshold. In addition to what the HMF offers, the proposed approach permits the prediction of single-node infection probabilities, and it can be applied to an entire class of interaction models ranging from the contact process to a fully reactive scenario like the HMF.
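The threshold recovered in the equivalence argument is the standard HMF result for uncorrelated networks, lambda_c = &lt;k&gt; / &lt;k^2&gt;. The sketch below evaluates it for a truncated power-law degree distribution (the exponent and cutoffs are arbitrary choices for illustration):

```python
# Heterogeneous mean-field (HMF) SIS threshold on uncorrelated networks:
# lambda_c = <k> / <k^2>. For degree exponent gamma <= 3, <k^2> diverges
# with the degree cutoff, so the threshold vanishes in the infinite-size
# limit (standard result, reproduced here for reference).
def hmf_threshold(gamma, k_min=3, k_max=1000):
    ks = range(k_min, k_max + 1)
    weights = [k ** -gamma for k in ks]          # P(k) ∝ k^-gamma, truncated
    z = sum(weights)
    k_mean = sum(k * w for k, w in zip(ks, weights)) / z
    k2_mean = sum(k * k * w for k, w in zip(ks, weights)) / z
    return k_mean / k2_mean

# For gamma = 2.5 the threshold keeps dropping as the cutoff k_max grows:
thresholds = {k_max: hmf_threshold(2.5, k_max=k_max)
              for k_max in (100, 1000, 10000)}
```

Any alternative formulation claiming HMF equivalence should reproduce exactly this cutoff dependence, which is why recovering the threshold is the natural consistency check.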
Then, we move on to the modeling of a more complex scenario in which interactions are described by traffic flows. In this model, packets are seen both as quanta of interaction between individuals and as the vehicle by which the disease spreads through the population. The analytical treatment of the model via an HMF approach shows that the epidemic threshold strictly depends on the traffic in the system and, in the case of a bounded delivery rate, can assume a finite value even for very high traffic intensities.
We conclude our analysis with a fully developed metapopulation system in which the impact of human responses to the epidemic spreading is considered. In the first response, individuals are allowed to cancel their journey with a probability proportional to the fraction of infected individuals at the destination. In the second, we permit them to take a longer but less infected path to the destination. Numerical analysis revealed an unexpected result: although cancelling a journey only slightly reduces the epidemic incidence, a longer but safer path can have a dramatic outcome, bringing the disease to places that would otherwise remain untouched and consistently raising the epidemic incidence.</Abstract>2011-03-25T23:00:00ZStochastic optimization for airport inventory managementCesaro, Annalisahttp://hdl.handle.net/2307/6562011-10-24T23:38:23Z2010-03-29T22:00:00Z<Title>Stochastic optimization for airport inventory management</Title>
<Authors>Cesaro, Annalisa</Authors>
<Issue Date>2010-03-30</Issue Date>
<Abstract>Effective supply chain management is currently recognized as a key determinant of competitiveness and success in manufacturing and services, because its implementation has a significant impact on cost, service and quality. Numerous strategies for achieving these targets have been proposed.
The improvements in information technology, coupled with the substantial reduction in the cost of processing, storing and analyzing data, have made new strategies more attractive. One such strategy allows movements of stock between locations at the same echelon level via lateral transshipments.
Despite these technological improvements, implementing such a transshipment strategy still requires great efficiency, especially in real-life problems, because it suffers from computer memory limits and long computation times when the number of warehouses gets large, or when the number of parallel items to be analyzed following an item approach gets large too. In fact, a drawback of the policy of interest is the state-dependent nature of the re-forwardings in the system.
Effective tactical planning therefore requires, in order to be implemented efficiently, the joint contribution of various disciplines, such as engineering, mathematics, economics and computer science. New solution methods have to be explored in order to effectively implement new management strategies.
This thesis uses operations research techniques to study a single-echelon, one-for-one ordering policy with complete pooling and a deterministic rule for lateral transshipments.
Specifically, we propose new evaluation and optimization methods that can handle real-life problems within a reasonable amount of computation time. In fact, we test all the proposed methods on a practical case study motivated by the needs of an Italian logistics company supporting the activity of 38 civil airports spread over the Italian territory. The company handles 17 warehouses and manages the overall process of purchasing and holding, ensuring that the overall reliability of safety equipment always stays within contractual limits. The aim of the company is therefore to grant the prescribed quality of service at minimum cost.
The items to be managed in such a context are typically expensive and have low demand, but we clearly recognize that there are many different types of service parts and that they perform many different functions. Therefore, in such a context, parts with a lower ratio between holding and transshipment costs may also be encountered and managed. Thus, with all the uncertainties that exist, a tactical plan should be created that provides the flexibility needed to meet a wide range of scenarios, focusing on the characteristics of the majority of items. Common techniques model the management policy with a Markov chain approach, thus evaluating such a policy for a given spare parts allocation. The optimal stock allocation problem is formulated as an integer program with a nonlinear objective function and nonlinear constraints. Therefore, total enumeration methods or approximation algorithms can be employed to solve it optimally.
Based on the needs summarized above, the following research objectives have been achieved in this dissertation. We have focused on a single-echelon one-for-one ordering policy with complete pooling and a deterministic rule for lateral transshipments.
We have formalized the Spares Allocation Problem (SAP) mathematically and understood its mathematical structure in order to build an exact algorithm for optimally allocating the spares. In fact, to the best of our knowledge, no exact algorithm other than total enumeration has been proposed in the literature for optimally allocating the spares in a continuous review setting. Given such an algorithm, it is then interesting:
– gaining insight into the SAP and underlining which factors influence inventories in such a context;
– evaluating fast and accurate heuristics for the SAP.
Efficient and accurate models for assessing the performance of a single-echelon replenishment policy have been proposed and evaluated, especially for large numbers of locations. A drawback of the policy of interest is the state-dependent nature of the re-forwardings in the system; it has therefore been interesting:
– understanding the properties of the Markov chain associated with the chosen policy;
– exploring, despite its state-dependent nature, the possibility of expressing the state probabilities of the associated Markov chain model exactly in product form;
– developing fast and accurate approximate models for evaluating the performance and costs of the system, since computing the state probabilities is not practical as the number of states in the Markov chain increases.
The achievement of the first objective clearly required a strong connection with the resolution of the second. In fact, the development of an exact algorithm for allocating the spares may require, in contexts with a large number of warehouses and high demand rates, approximate models for assessing the performance and evaluating the costs.
Specifically, in this thesis, by using a suitable optimization model, we have shown that the Markov chain cannot be decomposed exactly in product form. In fact, the best product form approximation returns a positive accuracy error, which implies that an exact product form does not exist.
Hence, we have adapted four approximation techniques to our model and evaluated their performance in terms of computational effort, memory requirements and error with respect to the exact value. Three techniques approximate the state probabilities with others that can be expressed in product form, so that the Markov chain can be decomposed. Specifically, we adapt a method by Alfredsson and Verrijdt, the Equivalent Random Traffic (ERT) method and the Interrupted Poisson Process (IPP) method. The last two techniques have been proposed for exploring the influence of peakedness in approximation models on the accuracy of performance estimation, due to the state-dependent nature of the re-forwardings in the system.
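The ERT method builds on the Erlang B loss formula; its standard recursion is reproduced here for reference (the offered load and stock level are illustrative values, not from the case study).

```python
# Erlang B loss formula, the building block of the ERT method: blocking
# probability with offered load `a` Erlangs and `s` servers (here: spares
# on stock). Standard stable recursion:
#   B(0) = 1,  B(k) = a*B(k-1) / (k + a*B(k-1))
def erlang_b(a, s):
    b = 1.0
    for k in range(1, s + 1):
        b = a * b / (k + a * b)
    return b

# Offered load of 2 Erlangs served by 4 spares:
p_block = erlang_b(2.0, 4)   # ≈ 0.0952 (exactly 2/21)
```

The recursion avoids the factorials of the closed-form expression and is numerically stable even for large stock levels, which matters when it is called inside an evaluation loop over many warehouses.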
The fourth technique is based on the multi-dimensional scaling-down approach, which studies an equivalent reduced Markov chain rather than decomposing the original one. The scaling-down method outperforms the decomposition techniques for small OA values (OA < 0.997), while the percentage error is similar for larger OA values. Besides the better performance shown in the figure, in our experiments the scaling-down method provides OA values smaller than the exact ones in more than 80% of the experiments, while the decomposition methods always find OA values larger than the exact ones. The scaling-down method is therefore more conservative than the decomposition methods, and this is an important feature when the method has to be used within an optimization procedure.
The formulation and solution of the Spares Allocation Problem (SAP) is one of the main achievements of this thesis. The mathematical structure of the problem has been investigated to build an efficient exact algorithm for optimally allocating the spares. Two assumptions on the cost structure of the problem lead to properties of the cost function that in turn allow the design of a new efficient branch-and-bound procedure. The lower bound is obtained by solving a reduced problem with a convex objective function, solvable to optimality very efficiently. A new fast heuristic algorithm is also developed to find a feasible allocation within a small computation time.
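The thesis's heuristic is not reproduced here; as a point of comparison, the classic greedy "marginal analysis" scheme for spares allocation looks like this (demand rates, holding costs and the budget are invented):

```python
import math

# Greedy marginal analysis for spare-parts allocation: repeatedly add one
# spare where it buys the largest drop in expected stockouts per unit of
# holding cost, until the budget runs out. (A generic textbook scheme, not
# the thesis's heuristic.)
def poisson_tail(lam, s):
    """P(demand during resupply > s) for Poisson demand with mean lam."""
    p, cdf = math.exp(-lam), math.exp(-lam)
    for k in range(1, s + 1):
        p *= lam / k
        cdf += p
    return 1.0 - cdf

def greedy_allocate(rates, holding_cost, budget):
    stock = [0] * len(rates)
    spent = 0.0
    while True:
        best, best_gain = None, 0.0
        for i, lam in enumerate(rates):
            if spent + holding_cost[i] > budget:
                continue
            gain = poisson_tail(lam, stock[i]) - poisson_tail(lam, stock[i] + 1)
            gain /= holding_cost[i]          # stockout reduction per euro
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return stock
        stock[best] += 1
        spent += holding_cost[best]

stock = greedy_allocate(rates=[0.2, 1.5, 0.6],
                        holding_cost=[1.0, 1.0, 2.0], budget=6.0)
```

Such greedy allocations are exactly the kind of fast feasible solutions a branch-and-bound procedure can use as an initial incumbent.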
Computational experiments demonstrate that the branch-and-bound technique is able to optimally solve almost all tested instances within a reasonable computation time. The heuristic algorithm finds quite good solutions within a very limited computation time, and is thus a promising approach for finding feasible solutions to difficult instances.
Moreover, we have analyzed several cost structure scenarios and observed that the transshipment cost is often comparable with the holding cost and therefore cannot be neglected in the solution of the problem.</Abstract>2010-03-29T22:00:00ZProduction scheduling in pharmaceutical industryVenditti, Lucahttp://hdl.handle.net/2307/6552011-10-24T23:38:22Z2010-03-29T22:00:00Z<Title>Production scheduling in pharmaceutical industry</Title>
<Authors>Venditti, Luca</Authors>
<Issue Date>2010-03-30</Issue Date>
<Abstract>Production scheduling is the phase of production management that produces a detailed description of the operations to be executed in a given, typically short, period of time. The manufacturing process in the pharmaceutical industry is characterized by highly complex processes. Moreover, compared to other manufacturing sectors, the pharmaceutical industry gives higher importance to on-time delivery than to throughput maximization, due to the economic and legal implications of late deliveries and stock-outs at the final customers. In this context, it is evident that scheduling is a critical operation, and hence that an automated scheduling system is important, both to obtain good scheduling solutions and to gain better control of the production process.
A first aim of this thesis is to demonstrate the improvements that may derive from automated scheduling. At the same time, the issue of the difficulty of solving practical scheduling problems is raised. A real case study of production scheduling, concerning the Packaging department of a real pharmaceutical plant, is studied. A complex model and a tabu search algorithm have been developed to solve the related, very hard scheduling problem, modeled as a multi-purpose machine problem with setup and removal times, release and due dates, and additional resource availability constraints. The algorithm is able to find good solutions within a short computation time, compared to the solutions found by human schedulers. This result confirms that scheduling technology is mature enough to solve complex real problems; to this aim, however, it is important to make use of detailed scheduling models.
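A minimal tabu-search skeleton conveys the flavour of the approach; the toy single-machine instance below (invented processing times and due dates, total-tardiness objective, swap neighbourhood, no aspiration criterion) is far simpler than the thesis's multi-purpose machine model.

```python
import itertools, random

# Toy instance: jobs as (processing_time, due_date); minimize total tardiness.
JOBS = [(4, 4), (2, 6), (6, 8), (3, 7), (5, 20)]

def tardiness(seq):
    t, total = 0, 0
    for j in seq:
        t += JOBS[j][0]
        total += max(0, t - JOBS[j][1])
    return total

def tabu_search(iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    cur = list(range(len(JOBS)))
    rng.shuffle(cur)
    best, best_cost = cur[:], tardiness(cur)
    tabu = {}                                 # swapped job pair -> expiry iter
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(len(cur)), 2):
            if tabu.get((cur[i], cur[j]), -1) >= it:
                continue                      # move is tabu
            nb = cur[:]
            nb[i], nb[j] = nb[j], nb[i]
            candidates.append((tardiness(nb), (cur[i], cur[j]), nb))
        if not candidates:
            break
        cost, move, nb = min(candidates)      # best non-tabu neighbour
        tabu[move] = it + tenure              # forbid undoing the swap
        cur = nb                              # accept even worsening moves
        if cost < best_cost:
            best, best_cost = nb[:], cost
    return best, best_cost

seq, cost = tabu_search()
```

The defining feature is that the search always moves, even uphill, while the tabu list blocks immediate backtracking; real implementations add aspiration criteria and much richer, problem-specific neighbourhoods.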
Besides operations management at a single stage, another interesting issue in production management, and in general in supply chain management, is the coordination between stages. This issue is particularly important in the pharmaceutical supply chain, in which the legal and economic implications of product stock-outs require the adoption of standards of product quality and availability close to 100%. Availability of final products requires excellence not only at each stage of the planning process, from strategic planning to real-time scheduling and delivery, but also in the coordination between different stages. A second aim of this thesis is to investigate the benefits that the introduction of a centralized decision support system can bring with respect to an uncoordinated, decentralized one; the case study of coordination between the Packaging department and the subsequent Distribution stage of the real pharmaceutical plant, i.e., between the packaging of final products and their distribution to wholesalers, is addressed. The distribution problem can be formulated as a vehicle routing problem with soft time windows. A classic tabu search algorithm is adapted to solve the scheduling problems of the two departments separately with a decentralized approach, and then to face the combined scheduling and delivery problem with a centralized approach. Results have demonstrated that the centralized approach provides much better results than the decentralized one, with even less computational effort.
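"Soft" time windows can be illustrated with a linear-penalty route cost; the penalty weights and the no-waiting treatment of early arrivals below are illustrative modelling choices, not necessarily those of the thesis.

```python
# Soft time windows: service outside [earliest, latest] is allowed but
# penalised linearly, instead of being infeasible as in the hard-window
# variant. Weights alpha/beta are illustrative.
def route_cost(legs, windows, alpha=1.0, beta=2.0):
    """legs[i]    = travel time from stop i-1 (or the depot) to stop i
       windows[i] = (earliest, latest) allowed service time at stop i"""
    t, cost = 0.0, 0.0
    for travel, (e, l) in zip(legs, windows):
        t += travel
        cost += travel
        if t < e:
            cost += alpha * (e - t)     # earliness penalty
        elif t > l:
            cost += beta * (t - l)      # lateness penalty
    return cost

# One vehicle, three stops: the second stop is reached after its window
# closes (arrival 5 vs latest 4), adding a lateness penalty of 2*1.
c = route_cost([2.0, 3.0, 4.0], [(0, 5), (1, 4), (8, 12)])   # -> 11.0
```

Because violations only cost money instead of making a route infeasible, a tabu search can move freely through slightly late schedules, which is what makes the combined scheduling-and-delivery search tractable.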
These results confirm the need for excellence in the coordination among the different stages of the production process, in addition to within each stage, when high standards of quality and reliability must be provided to the final customers.</Abstract>2010-03-29T22:00:00ZOn the existence and optimality of some planar graph embeddingsAngelini, Patriziohttp://hdl.handle.net/2307/6502011-10-24T23:38:05Z2010-03-29T22:00:00Z<Title>On the existence and optimality of some planar graph embeddings</Title>
<Authors>Angelini, Patrizio</Authors>
<Issue Date>2010-03-30</Issue Date>
<Abstract>A graph is an abstract mathematical representation of a set of objects, called vertices, together with a pairwise relationship between such objects, represented by a collection of edges connecting pairs of vertices. Examples of relationships among objects that are representable by a graph can be found in every field, ranging from interpersonal relationships to computer networks and from knowledge representation to bioinformatics. Of course, the best way to make such a relationship clearly understandable is to visualize the graph so that vertices and edges are easily recognizable to the human eye. This issue is addressed in the research field of Graph Drawing, which can be regarded as a cross between the areas of Graph Theory, Graph Algorithms, and Computational Geometry.
In Graph Drawing, the most common way to visualize a graph is to draw each vertex as a point in the plane and each edge as a curve connecting the corresponding points. The placement of the vertices in the plane and the drawing of the curves should be performed in such a way that the resulting drawing is nice and readable, in the sense that the information described by the graph should be understandable, possibly at a glance. In order to obtain the desired nice and readable visualization, it is important to formalize the aesthetic criteria that distinguish a “good” drawing from a “bad” one. Then, the goal of Graph Drawing is to create algorithms that automatically produce drawings respecting such criteria.
The most natural aesthetic criterion one can think of is probably the absence of partial or complete overlaps among vertices and edges, which is called planarity. Another important criterion to consider when drawing a graph is the area of the drawing, as a drawing with a small area can be better visualized on a small screen. Observe that, while planarity is a property that a drawing may or may not satisfy, the drawing area is a measure of quality that can be used to compare two drawings. Many other properties and quality measures can be defined concerning the visualization of graphs, some of which can even be combined. However, the “best” drawing of a graph might not exist, since drawings that are “good” in terms of a certain criterion may be “bad” in terms of another.
It is interesting to observe that some of the aesthetic criteria of a drawing of a graph depend only on its topological features, while other criteria also depend on the geometrical realization. For example, an important theorem in Graph Drawing, known as Fáry's Theorem, states that every graph admitting a planar drawing also admits a planar drawing in which the edges are represented by straight-line segments. This implies that, in order to decide whether a given graph admits a planar drawing, it is possible to study whether the graph admits a planar embedding, that is, a circular ordering of the edges around each vertex that determines no topological crossing in the induced drawing, rather than computing the actual coordinates of the points representing the vertices. On the other hand, given a graph with a fixed embedding, different geometrical realizations may lead to drawings with different areas.
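Two classical necessary conditions for planarity, both consequences of Euler's formula, make the topological side of the discussion concrete:

```python
# Consequences of Euler's formula, usable as quick *necessary* tests for
# planarity (they are not sufficient): a simple planar graph on n >= 3
# vertices has at most 3n - 6 edges, and at most 2n - 4 if it is also
# triangle-free (e.g. bipartite).
def may_be_planar(n, m, triangle_free=False):
    if n < 3:
        return True
    bound = 2 * n - 4 if triangle_free else 3 * n - 6
    return m <= bound

# K5: 5 vertices, 10 edges -> 10 > 3*5 - 6 = 9, so K5 cannot be planar.
k5 = may_be_planar(5, 10)                      # False
# K3,3: 6 vertices, 9 edges, bipartite -> 9 > 2*6 - 4 = 8, not planar either.
k33 = may_be_planar(6, 9, triangle_free=True)  # False
```

These bounds rule out K5 and K3,3 — the two obstructions of Kuratowski's theorem — by counting alone; deciding planarity in general, and computing an actual embedding, requires the linear-time algorithms that Graph Drawing builds upon.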
Several natural and interesting questions can be formulated concerning the aesthetic criteria defined for the graphical representation of graphs.
When considering a graph property, the first, and probably most natural, question that arises concerns the existence of graphs satisfying such a property and of graphs not satisfying it. If both positive and negative instances exist, the problem can then be studied either by asking for a characterization of the family of graphs that satisfy the desired property, or by determining the computational complexity of deciding whether a given graph satisfies the property.
On the other hand, when considering a particular measure of quality of a drawing or of an embedding, the two most natural questions are certainly the one asking for the optimal value of such a measure among all the graphs of a certain family, and the one asking for an efficient algorithm that optimizes such a measure for a given graph.
In this thesis we address and partially answer such questions on several classes and types of graphs, proposing algorithms for computing planar embeddings or drawings that satisfy certain properties or that are optimal with respect to certain measures of quality. We mainly deal with planar graphs and with graph properties and measures that depend on the topology of the graph (Part II) and on its geometry (Part III). We also consider the same questions on simultaneous graph drawing problems (Part IV), that is, problems involving more than one graph, and on clustered graphs (Part V), that is, graphs whose vertices are grouped into clusters by means of a hierarchical structure.</Abstract>2010-03-29T22:00:00ZQuality of mappings for data exchange applicationsRaunich, Salvatorehttp://hdl.handle.net/2307/6242011-10-13T23:35:34Z2010-03-29T22:00:00Z<Title>Quality of mappings for data exchange applications</Title>
<Authors>Raunich, Salvatore</Authors>
<Issue Date>2010-03-30</Issue Date>
<Abstract>Research has investigated mappings among data sources from two perspectives. On one side, there are studies of practical tools for schema mapping generation; these focus on algorithms that generate mappings based on visual specifications provided by users. On the other side, there is theoretical research on data exchange, which studies how to generate a solution -- i.e., a target instance -- given a set of mappings, usually specified as tuple generating dependencies.
However, these two research lines have progressed in a rather independent way, and we are still far from a complete understanding of the properties that a "good" schema mapping system should have; to give an example, a data exchange problem may admit many possible solutions.
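The tuple-generating-dependency machinery can be illustrated with a one-rule "chase", of the kind data exchange theory uses to build a canonical universal solution; the schema and the mapping below are invented for illustration.

```python
import itertools

# A minimal chase over one source-to-target tgd:
#   employee(name, dept) -> EXISTS m . works_in(name, m), manages(m, dept)
# Each existential variable is witnessed by a fresh labelled null (_N0, ...),
# which is what makes the resulting target instance a *universal* solution.
source = {("alice", "sales"), ("bob", "hr")}
works_in, manages = set(), set()
fresh = itertools.count()                       # generator of labelled nulls

for name, dept in sorted(source):
    null = f"_N{next(fresh)}"                   # existential witness
    works_in.add((name, null))
    manages.add((null, dept))

# works_in = {('alice', '_N0'), ('bob', '_N1')}
# manages  = {('_N0', 'sales'), ('_N1', 'hr')}
```

Different mapping systems may instead reuse nulls, invent Skolem terms, or merge tuples — all yielding different solutions to the same data exchange problem, which is exactly why a notion of mapping quality is needed.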
In fact, there is no consensus yet on a notion of quality for schema mappings. In this thesis, based on concepts provided by schema mapping and data exchange research, we aim to investigate such a notion.
Our goal is to identify a fairly general formal context that incorporates the different mapping-generation systems proposed in the literature, and to develop algorithms, tools and methods for characterizing the quality of the mappings generated by those systems.</Abstract>2010-03-29T22:00:00ZUnderstanding and detecting BGP instabilitiesCittadini, Lucahttp://hdl.handle.net/2307/6172011-10-13T23:37:44Z2010-03-29T22:00:00Z<Title>Understanding and detecting BGP instabilities</Title>
<Authors>Cittadini, Luca</Authors>
<Issue Date>2010-03-30</Issue Date>
<Abstract>Communication networks have reached an amazing size and complexity nowadays. The Internet, which was born as an experimental network connecting a handful of volunteer research institutes, has grown to become a huge distributed system interconnecting almost 700 million hosts at present. As soon as it became clear that computer networks would drive the information revolution, the Internet drew a lot of interest from both academia and industry. Moreover, the demand for features that were not envisaged when the Internet was designed grew alongside the size and complexity of the Internet itself. Routing, that is, finding a path in a network that interconnects a given source to a given destination, also needed to evolve accordingly: as soon as the Internet entered its commercial era, there was a strong demand for routing protocols that supported policies.
Among the wide variety of routing protocols that can be found today in the Internet, the Border Gateway Protocol (BGP) is responsible for connecting large administrative domains (called Autonomous Systems, or ASes), each administering its own network. BGP configuration languages allow network administrators to define fine-grained policies that influence the selection and the dissemination of routes over the network; BGP is therefore classified as a policy-based interdomain routing protocol. BGP policies allow each AS to autonomously configure its network in order, e.g., to minimize the cost of routing traffic or to optimize delay.
Ideally, BGP was designed to let each administrative domain choose the best route (where “best” obviously has local significance) among the alternatives proposed by neighboring ASes. Unfortunately, as is often the case in other branches of computer science, many agents that independently pursue a local optimum do not always converge to a global optimum. In particular, it has been shown that there exist sets of BGP policies that cannot be satisfied at the same time, and that trap the protocol in infinite oscillations in which a stable routing choice is never reached. This fact spurred a lot of research effort toward techniques to characterize, discover, mitigate and eliminate BGP instabilities.
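The canonical example of such unsatisfiable policies is the BAD GADGET instance of Griffin, Shepherd and Wilfong; a toy synchronous-update rendition of it shows the oscillation (real BGP activations are asynchronous, so this schedule is only illustrative).

```python
# BAD GADGET: three ASes 1, 2, 3 each prefer the route through their
# clockwise neighbour over their own direct route to destination 0.
# No routing state satisfies all preferences at once, and synchronous
# best responses oscillate forever under this schedule.
def step(state):
    new = {}
    for i in (1, 2, 3):
        j = i % 3 + 1
        # The longer path (i, j, 0) is consistent only if j announces (j, 0).
        new[i] = (i, j, 0) if state[j] == (j, 0) else (i, 0)
    return new

state = {i: (i, 0) for i in (1, 2, 3)}   # everyone starts on the direct path
trace = [state]
for _ in range(4):
    state = step(state)
    trace.append(state)
# trace[0] == trace[2] == trace[4] but trace[1] differs: a period-2
# oscillation in which a stable routing choice is never reached.
```

Characterizations of BGP safety ask when such oscillations are impossible under *every* activation schedule, which is exactly the setting of the contributions described below.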
This thesis presents novel research contributions, together with related work on the characterization and detection of BGP instabilities, under a common framework. We cover the necessary theoretical background as well as practical techniques and methodologies for analyzing real BGP networks. First, we tackle the problem of finding a suitable model for studying BGP oscillations. This is indeed a nontrivial task, as many of the simplifying assumptions that have often been made to ease the analysis provably make the model unable to capture certain kinds of routing instabilities. Besides allowing us to pick the model best suited to studying oscillations, the insight provided by our study also enables us to review related work with a deeper understanding of the interplay among the many different models of BGP.
This thesis makes three main contributions. First, we show a necessary and
sufficient condition for BGP safety under filtering, that is, the property of a
BGP network to have guaranteed convergence under arbitrary filtering of BGP
routes. To the best of our knowledge, this is the first complete characterization
of safety under filtering. We exploit this finding to show a debugging technique
that is able to spot the potential trouble points of a network by analyzing
just two different routing states.
Second, we study the possibility of manipulating internal BGP (iBGP) attributes. While in general such a practice exacerbates the BGP stability problem, adherence to simple guidelines ensures BGP stability while still providing
some benefits in terms, e.g., of traffic engineering capabilities.
Third, we devise and implement an algorithm that is able to tell whether
a given BGP network is stable. This algorithm is provably free from false
positives, and it is able to pinpoint the trouble points of a potentially unstable
network. We show that this algorithm, together with techniques to perform
some preprocessing on BGP networks, can be implemented efficiently enough
to deal with Internet-scale BGP topologies as well as very large iBGP networks.
Finally, we propose a BGP monitoring system that is able to collect BGP data
in such a way as to enable the analysis of what-if scenarios.</Abstract>2010-03-29T22:00:00ZAdaptive techniques in web-based educationVaste, Giuliahttp://hdl.handle.net/2307/6162011-10-13T23:37:58Z2010-03-29T22:00:00Z<Title>Adaptive techniques in web-based education</Title>
<Authors>Vaste, Giulia</Authors>
<Issue Date>2010-03-30</Issue Date>
<Abstract>This dissertation proposes the use of artificial intelligence methodologies
and techniques for providing personalization of web-based courses. Almost all
web-based systems try to carry out personalization in order to be more useful,
more attractive, and more efficient in fulfilling users' needs. However, personalization has a cost in terms of background operations, for instance, in educational
systems, in terms of the teacher's effort. What is a reasonable compromise between
sophisticated personalization methodologies and their realization? Is it possible to provide personalization with an acceptable effort for the domain expert
responsible for producing content?
This dissertation focuses on these questions, considering in particular systems for web-based education, and proposes a methodology that aims to strike
a reasonable compromise between effective personalization from the student's point of view and from the teacher's point of view or, in general, from
the user's point of view and from the domain expert's point of view.
From the student's point of view, personalization is provided on the basis
of the student's knowledge and learning styles, guiding the student through the
course as a teacher would: proposing a sequence of contents
suitable for the student at the beginning of the course and performing recovery
strategies, while the course is in progress, if the study does not proceed as it
should.
From the teacher's point of view, personalized courses are generated automatically, on the basis of the student model. The teacher is required to
specify only a few metadata, necessary for characterizing learning materials, such as
prerequisite relations and the suitability of contents for a given type of student.
The teacher is helped by a graphical interface allowing a global vision of the
course, and he can express didactic preferences, such as the level of the course
according to Bloom's taxonomy. The effort required of the teacher stays as
close as possible to his “way of thinking”: prerequisite relations are generally
defined, even if implicitly, when a course is arranged; learning materials are
tagged according to Felder and Silverman's learning styles model. Adherence to this model is not, however, a strict constraint for the teacher: the
estimate of the student's learning styles is in fact updated taking into account the
teacher's tagging of the learning materials. In this way, relevance is given to
the match between the teacher's tagging of the learning materials and the
student's way of learning: if the student studies a given material successfully,
the material is considered suitable for the student and his learning styles are
updated towards the weights given by the teacher to that material.
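The update rule just described can be sketched as a simple exponential moving average; the dimension names, weight values, and learning rate below are illustrative assumptions, not LS-Plan's actual parameters.

```python
# Sketch of the learning-style update described above: on a successful study
# session, nudge each learning-style estimate towards the teacher's tags for
# that material. Dimension names and the learning rate are assumptions made
# for illustration only.

ALPHA = 0.2  # assumed learning rate

def update_styles(student, material_tags, alpha=ALPHA):
    """Return a new student-model dict moved towards the material's tags."""
    return {dim: (1 - alpha) * student[dim] + alpha * material_tags[dim]
            for dim in student}

student = {"visual": 0.5, "sequential": 0.5}   # current estimate
tags = {"visual": 1.0, "sequential": 0.0}      # teacher's tagging of the material
student = update_styles(student, tags)
print(student)  # each dimension moves 20% of the way towards the tag
```

With repeated successful sessions on similarly tagged materials, the estimate converges towards the teacher's weights, which is the behavior the abstract describes.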
On the basis of the above-mentioned methodologies, the LS-Plan system
has been proposed. LS-Plan provides educational hypermedia with adaptivity; it has been integrated into the Lecomps educational hypermedia in order
to carry out evaluations from both the student's and the teacher's point
of view. A layered evaluation and an evaluation of the system as a whole, together with evaluations of the
teacher's functionalities, have been performed and have shown positive results.
Different approaches have been proposed in the literature for curriculum
sequencing, that is, “helping the student to find an ‘optimal path’ through the
learning material”. In line with the aim of providing support for teachers in
performing personalization, a suitable system, LS-Lab, for comparing different
sequencing algorithms has been proposed. LS-Lab provides a uniform environment in
which several algorithms can be compared on the same input, i.e., the same
set of didactic materials, the same sample student models, and the same learning
objective. The subjective comparison, made by teachers or domain experts, is
supported by some metrics and by the visualization of the produced sequences.
Given the need to provide easy-to-use personalization for
background actors, the personalization methodologies proposed for the educational domain have also been applied to the personalization of cultural visits. Analogies and differences between course personalization and cultural-visit personalization have
been identified, and the framework for course personalization has
been adapted and enhanced taking into account the visitor's interests.</Abstract>2010-03-29T22:00:00ZA methodology for generating grammars for multimodal languagesD'Ulizia, Ariannahttp://hdl.handle.net/2307/5232011-07-12T00:03:19Z2009-04-01T22:00:00Z<Title>A methodology for generating grammars for multimodal languages</Title>
<Authors>D'Ulizia, Arianna</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>Human communication is naturally multimodal. People normally
interact through several communication channels, such as gesture,
drawing, handwriting, facial expressions, and gaze, in combination with
speech or through speech only, which is the prevalent modality. This
synergistic use of multiple interaction channels makes human
communication flexible, natural, and robust. In recent years several
efforts have been made to endow computer interfaces with similar
flexibility, naturalness, and robustness. The research presented in
this thesis represents one such effort.
The main contributions of this thesis are twofold. First, it
provides a methodology for multimodal language definition that is
general enough to be applicable to any modality and in
any domain. Second, it provides an efficient incremental
learning algorithm that, following a "by example" approach,
makes it possible to generate the production rules of the defined grammar
starting from the acceptable multimodal sentences.</Abstract>2009-04-01T22:00:00ZCoherence problem between Business Rules and Business ProcessesLezoche, Mariohttp://hdl.handle.net/2307/5222011-07-12T00:03:16Z2009-04-01T22:00:00Z<Title>Coherence problem between Business Rules and Business Processes</Title>
<Authors>Lezoche, Mario</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>Business Process (BP) transformation is a key aspect of the BP lifecycle. There are several reasons that may cause BP modifications. Among these, particularly important are changes in the enterprise's organization and operation strategies, which can be captured by business rules (BRs). This work focuses on a BP-based organization that is regulated by a set of BRs: such BPs and BRs need to be globally consistent (and have to be kept consistent after any change). This thesis presents an ontological approach capable of representing BRs and BPs in a coherent way. The objective is then to identify all processes in the BP repository that are (or have become) inconsistent with the BRs and thus need to be changed to reestablish overall consistency.</Abstract>2009-04-01T22:00:00ZDealing with multimodal languages ambiguities : a classification and solution methodCaschera, Maria Chiarahttp://hdl.handle.net/2307/5212011-07-12T00:03:15Z2009-04-01T22:00:00Z<Title>Dealing with multimodal languages ambiguities : a classification and solution method</Title>
<Authors>Caschera, Maria Chiara</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>Starting from a discussion of the problem of ambiguity and its pervasiveness in
communication processes, this dissertation addresses the problems of
classifying and solving ambiguities in multimodal languages.
The thesis gives an overview of the work proposed in the literature on
ambiguities in natural languages and visual languages, and discusses some
existing proposals on multimodal ambiguities. An original classification of
multimodal ambiguities has been defined from a linguistic perspective,
introducing the notions of multimodal grammar, multimodal sentence, and
multimodal language.
An overview of the methods that the literature proposes for avoiding and
detecting ambiguities is also provided. These methods are grouped into
prevention of ambiguities, a-posteriori resolution, and approximation
resolution methods. The analysis of these methods has underlined the
suitability of Hidden Markov Models (HMMs) for disambiguation
processes. However, due to the complexity of ambiguities in multimodal
interaction, this thesis uses Hierarchical Hidden Markov Models to
manage the semantic and syntactic classes of ambiguities of multimodal
sentences; this choice makes it possible to operate at different levels, going from the
terminal elements to the whole multimodal sentence. The proposed methods for
classifying and solving multimodal ambiguities have been used to design
and implement two software modules. The experimental results of these
modules have shown a good level of accuracy in the classification
and solution of multimodal ambiguities.</Abstract>2009-04-01T22:00:00ZRoot cause analysis and forensics in interdomain routing: models, methodologies and toolsRefice, Tizianahttp://hdl.handle.net/2307/5142011-07-05T00:03:14Z2009-04-01T22:00:00Z<Title>Root cause analysis and forensics in interdomain routing: models, methodologies and tools</Title>
<Authors>Refice, Tiziana</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>The Internet is an interconnection of administrative domains called Autonomous
Systems (ASes). Each AS contains one or multiple destination networks and
each network is identified by an IP prefix. The Border Gateway Protocol
(BGP) [RLH06] is the de-facto standard routing protocol used to exchange
reachability information among ASes, and a BGP session between two distinct
ASes is called a peering. Each AS learns through BGP its "best" route towards
each destination in the Internet, updates it in response to network events (e.g.,
link failures, router resets, or policy changes), and propagates the change by
means of BGP messages called updates. The propagation of BGP updates can be
partially controlled via routing policy specifications.
In order to investigate the Internet's behavior over time, several repositories
provide historical data. Since 1997 and 1999, respectively, the University of
Oregon RouteViews Project (RV) [roua] and the RIPE NCC Routing Information
Service (RIS) [roub] have spread passive monitors (or vantage points)
worldwide, which continuously gather BGP routing data from the Internet,
permanently store them, and make them publicly available. Currently, there are
about 800 such monitors. Also, in 1995 the Internet Routing Registry (IRR)
was established and started collecting the inter-AS routing policies of many of the
networks in the Internet, with the main purpose of promoting the stability,
consistency, and security of global interdomain routing.
As the Internet becomes a more and more critical infrastructure, the need
for understanding and (at least to some extent) controlling interdomain
routing increases. Internet Service Providers (ISPs), in order to improve the
quality of service offered to their customers, want to monitor the reachability of
specific prefixes, check the effectiveness of their own routing policies, and assess
the impact of traffic engineering configurations. In this context, it is crucial to
be able to detect and debug misconfigurations or faults, in order to possibly
fix them. More generally, the problem of identifying Internet events, locating
their root causes, and understanding their dynamics is attracting increasing
attention from both researchers and network operators.
However, despite the large amount of research effort, diagnosing routing
dynamics remains very difficult, for several reasons: (i) The system has a sheer
size: as of December 2008, there are about 280,000 prefixes and more than
30,000 Autonomous Systems densely connected to each other. (ii) The
Internet is highly dynamic: RIS' and RV's monitors currently receive
an average of about 1,500 BGP updates per minute, with peaks of more than
50,000 updates per minute. (iii) Due to the complex interconnections among ASes
and their routing policies, the effects of network events are often separated (both in
time and space) from their causes, and different vantage points record different
data in response to the same routing changes. Also, multiple routing events
can occur simultaneously. Overall, given such size and dynamics, "naive"
approaches to extracting relevant information from Internet routing data are
neither effective nor efficient.
Therefore, both researchers and network operators interested in understanding
interdomain routing have to cope with several major challenges. First,
in order to deal with such a huge and complex network, they need to define
what to measure, i.e., they need a model of Internet routing that captures
the main dynamics while filtering out the "noise" (e.g., routing changes that do not
provide information relevant to the identification of network events). Based
on such a model, they need a methodology that, given the currently available
data sources, detects network events and infers when and where they happened.
Furthermore, they need tools that efficiently handle the huge amount
of data, support the analysis of network behavior over time, and provide
real-time information in order to spot and possibly fix outages as soon as they
occur. Since the analysis of network events often requires manual work,
effective paradigms for the visualization of routing data are also very helpful.
Previous works leave most of these problems open.
The research work described throughout this thesis addresses these problems
and proposes approaches to (at least partially) solve them. Namely, this
thesis presents the following contributions.
This thesis illustrates a new perspective to drive the analysis of Internet
dynamics without getting lost in the huge BGP dataset. Basically, while previous
works usually address root cause analysis from a "global perspective",
i.e., by taking into account the dynamics of the whole Internet and trying to
identify major events affecting it, this thesis tackles the same problem with
an ISP-oriented approach: it assumes that ISPs are usually more interested in
the reachability of their own prefixes than in the status of the whole
network; hence, it focuses the analysis on user-specified prefixes and correlates
their behavior with the global Internet dynamics. In particular, this thesis
formally models the Internet as a flow-based system, where monitors are the
sources of the flows and the ASes originating BGP updates are the sinks. This
thesis also defines a methodology that correlates flow variations with routing
changes in order to spot network events and the root causes that triggered
them. Furthermore, the BGPath tool has been developed to support this methodology,
and this thesis describes its main features. BGPath is a publicly available
tool that uses BGP data collected by the RIS and RV projects and provides
the user with routing information from both single- and cross-vantage-point
views. BGPath also assesses the reliability of the collection system, in
order to avoid measurement artifacts. The algorithms BGPath relies on are
shown to efficiently process huge streams of BGP data, fulfilling nearly-real-time
constraints.
While the ISP-oriented approach presented in this thesis gives good insight
into both major and minor events affecting specific portions of the Internet,
approaching the root cause analysis problem from a "global perspective" usually
does not provide such fine-grained results. On the other hand, the
global approach is critical to identify major interdomain events without any
a-priori knowledge of the prefixes and/or the ASes involved. This thesis explores
this perspective too. Specifically, it proposes a novel methodology
based on Principal Component Analysis (PCA), a well-known statistical
technique that is commonly used to reduce the number of dimensions of multi-dimensional
datasets in order to highlight the most significant trends in the
data. Since the interdomain routing dataset is inherently multi-dimensional
(in time, space, prefixes, observation points, ...), this thesis suggests applying
PCA to this dataset in order to identify the most significant contributors
to the Internet dynamics.
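As an illustration of this idea, the sketch below applies PCA, via a singular value decomposition, to a toy matrix of per-prefix BGP update counts over time; the data and its interpretation are invented for illustration and are not the thesis's actual dataset or methodology.

```python
import numpy as np

# Toy illustration of the PCA idea described above: rows are time bins,
# columns are prefixes, entries are BGP update counts. The data is invented;
# the real interdomain routing dataset is far larger and richer.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(60, 5)).astype(float)  # ambient churn
counts[20:30, 3] += 40                                 # a routing-event burst on prefix 3

X = counts - counts.mean(axis=0)            # center each prefix's time series
U, S, Vt = np.linalg.svd(X, full_matrices=False)  # principal directions in Vt

# The first principal direction should load mostly on the bursty prefix,
# identifying it as the most significant contributor to the dynamics.
first_pc = Vt[0]
dominant = int(np.argmax(np.abs(first_pc)))
print("dominant prefix:", dominant)
```

The burst dwarfs the background variance, so the first principal component singles out the affected prefix; on real data the same mechanism is meant to surface the prefixes or vantage points driving an interdomain event.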
BGP data collected by RIS' and RV's monitors provide a detailed view
of the actual status of the interdomain routing. However, it does not report
all the inter-AS peering relationships which are not active. For example, in
"normal" conditions, backup links do not appear in the routing tables. Still,
in order to understand the reasons behind some network events and to pre-
dict the evolution of the routing when an event occurs, such information is
actually very important. To cope with the intrinsic limitations of the RIS and
RV dataset, this thesis analyzes the data stored in the Internet Routing Reg-
istry and describes how to extract peering relationships from routing policies
collected within. Moreover, the proposed approach specifies how to solve in-
consistencies among the distinct databases the IRR consists of. The obtained
results show that - even though the IRR data is often out-of-date, it still pro-
vides a quite unique amount of topological information which usually does not
appear in the global routing.
The research work described in this thesis relies on the assumption that the
Internet is a graph where ASes are atomic entities in interdomain routing.
However, recent papers [MFM+06, MUF+07] show that such a model can
mislead the understanding of the global routing behavior. Thus, this thesis
investigates this problem by measuring the route diversity that can be observed
by passive remote vantage points, defining a methodology to compute it from
a dynamic BGP dataset, and characterizing it in terms of the location of ASes in
the Internet customer-provider hierarchy and the choice of monitors.
The thesis documents the forensic analysis of two well-known events that occurred
at the beginning of 2007, in which the models, methodologies, and tools
described in the thesis are exemplified using real case studies.</Abstract>2009-04-01T22:00:00ZOutsourced storage services : authentication and security visualizationPalazzi, Bernardohttp://hdl.handle.net/2307/5102011-07-05T00:03:38Z2009-04-01T22:00:00Z<Title>Outsourced storage services : authentication and security visualization</Title>
<Authors>Palazzi, Bernardo</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>We address the problem of authenticating data in outsourced, often untrusted,
services, where a user stores more or less confidential information in a
remote service such as an online calendar, remote storage, an outsourced DBMS,
and others. How can outsourced data be proven authentic?
Data authentication captures the security needs of many computing applications that save and use sensitive information in hostile remote distributed
environments, and its importance increases given the current trend in modern
system design towards outsourced services with minimal trust assumptions.
Solutions should be not only provably secure, but also efficient and easily implementable.
This dissertation presents an extensive study of data authentication and
introduces a general method, based on a security middleware external to the
service, that performs authentication operations in parallel with standard service functions to minimize the time overhead. We examine the problem for different services and design efficient new techniques for authenticating general
classes of operations, such as relational primitives, multidimensional queries,
relational joins, and remote storage management.
Another important issue covered in this dissertation is the security
usability of outsourced services. In particular, we analyze information security visualization techniques and address the problem of visualizing file permissions. TrACE, a prototype tool based on a treemap, is presented, together with
an extensive user study showing the usability improvement provided by this tool.</Abstract>2009-04-01T22:00:00ZSmall Screens and Large Graphs: Area-Efficient Drawings of Planar Combinatorial StructuresFrati, Fabriziohttp://hdl.handle.net/2307/5032011-07-05T00:02:54Z2009-04-01T22:00:00Z<Title>Small Screens and Large Graphs: Area-Efficient Drawings of Planar Combinatorial Structures</Title>
<Authors>Frati, Fabrizio</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>Dipartimento di Informatica e Automazione, Roma Tre University
Graphs are the most widely used data structures to represent relationships among objects.
Maps, networks, circuits, molecules, and compounds are a few examples of structures that are commonly
represented by graphs. The clearest way to express the information conveyed in a graph is to
visualize it. Namely, a drawing of a graph represents each object (in graph terminology:
vertex) of the graph as a point in the plane and each relationship (in graph terminology: edge)
between two objects as a line connecting the corresponding points. However, not every drawing
can be regarded as a good representation of the graph. In fact, a drawing should be readable, that
is, the human eye should be able to easily identify the relationships among the objects in the graph
at first glance. Clearly, this is not a formal definition of what differentiates a
good drawing from a bad one. However, a few topological and geometric features have been
recognized and accepted as the criteria a drawing should satisfy in order to be readable.
Planarity is probably the best characteristic a drawing can have. The absence of intersections
between the edges of the graph allows a viewer to easily distinguish the line representing any
edge and hence to immediately understand which vertices are connected by it. From a
geometric perspective, it is preferable that edges be drawn as straight lines, since edges
that bend and repeatedly change direction are detrimental to the readability of the drawing. When
the straight-line requirement cannot be met, it is still desirable to have edges drawn as
poly-lines bending only a limited number of times.
When the size of the graph to be represented is too large for the drawing to be
constructed manually, there is a need for an algorithm that automatically constructs such a drawing.
Graph Drawing deals with the design of algorithms to automatically construct drawings of graphs.
Usually a Graph Drawing algorithm takes as input a graph, a set of requirements the drawing
must satisfy (such as being planar or having straight-line edges), and a set of aesthetics the drawing
should satisfy as much as possible. The most important aesthetic a drawing should satisfy is
probably that of having small area. In fact, automatic drawings usually have to be displayed
on a computer screen of bounded size, hence they have to fit in the space available on the screen.
The study of graph drawings in small area was first motivated by the design of VLSI circuits
and has attracted intense research efforts for almost thirty years now.
In this thesis we deal with algorithms and bounds for drawing graphs in small area. We mainly
deal with planar graphs (Part I of the thesis), series-parallel graphs and outerplanar graphs (Part
II), trees (Part III), and clustered graphs (Part IV), and for each of these graph classes we consider
the problem of obtaining drawings in small area under a large number of drawing conventions (e.g.,
straight-line, poly-line, orthogonal, upward). We design several algorithms for the construction
of graph drawings in small area and we obtain lower bounds for the area requirements of several
drawing styles. The graph classes and the drawing conventions we consider are among the most
commonly used in applications. Nevertheless, the beauty of some combinatorial, topological, and
geometric problems concerning the construction of graph drawings in small area justifies their study
even from a purely theoretical point of view.
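As a small taste of area-efficient tree drawing, the sketch below implements a classic hv-drawing of a binary tree: at each node the smaller subtree goes below the root and the larger one to its right, which bounds the height by O(log n) and hence the area by O(n log n). This is a standard textbook construction, not one of the thesis's own algorithms, and the tuple encoding of trees is an assumption made for illustration.

```python
def tree_size(t):
    """Number of nodes in binary tree t = (label, left, right) or None."""
    return 0 if t is None else 1 + tree_size(t[1]) + tree_size(t[2])

def hv_draw(t):
    """Return (positions, width, height) of an hv-drawing of t.

    The root lands at (0, 0) and y grows downward. Placing the smaller
    subtree below keeps the height O(log n), so the grid area is O(n log n)."""
    if t is None:
        return {}, 0, 0
    label, left, right = t
    below, beside = (left, right) if tree_size(left) <= tree_size(right) else (right, left)
    pos_b, w_b, h_b = hv_draw(below)
    pos_r, w_r, h_r = hv_draw(beside)
    pos = {label: (0, 0)}
    for n, (x, y) in pos_b.items():   # smaller subtree: one row down
        pos[n] = (x, y + 1)
    dx = max(1, w_b)                  # larger subtree: just past the below box
    for n, (x, y) in pos_r.items():
        pos[n] = (x + dx, y)
    return pos, dx + w_r, max(h_r, h_b + 1)

demo = ("a", ("b", ("d", None, None), ("e", None, None)),
             ("c", ("f", None, None), ("g", None, None)))
pos, width, height = hv_draw(demo)
print(width, height)  # this 7-node complete tree fits in a 4 x 3 grid
```

Since the "below" subtree has at most half the nodes, the height recurrence h(n) = max(h(larger), h(smaller) + 1) resolves to O(log n), while the width stays O(n).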
</Abstract>2009-04-01T22:00:00ZInteroperability of Semantic AnnotationsPaolozzi, Stefanohttp://hdl.handle.net/2307/5022011-07-05T00:03:25Z2009-04-01T22:00:00Z<Title>Interoperability of Semantic Annotations</Title>
<Authors>Paolozzi, Stefano</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>The Semantic Web is the next generation of the World Wide Web. It extends
the Web by giving information a well-defined meaning, allowing it to be processed
by machines. This vision is becoming reality thanks to a set
of technologies specified and maintained by the World Wide
Web Consortium (W3C), and to growing research efforts from industry
and academia. The basis for the Semantic Web is therefore computer-understandable
descriptions of resources. We can create such descriptions by
annotating resources with metadata, resulting in "annotations" about each resource.
Semantic annotation is the creation of metadata and of relations between
them, with the task of defining new methods of access to information and
enriching the potential of those already existing. The main goal is to
have information on the Web defined in such a way that its meaning can be
explicitly interpreted by automatic systems as well, not just by humans.
There is a huge amount of interesting and important information represented
through semantic annotations, but there are still many different formalisms,
showing a lack of standardization and a consequent need for interoperability.
This growing need for interoperability in this field convinced us to extend
our first proposal, strictly related to database models, in order to also address
semantic annotations. Our proposal, mainly based on Model Management
techniques, focuses on the problem of translating schemas and data between
Semantic Web data models and on the integration of those models with database
models, which have a more rigid and well-defined structure.
In this work we present the main concepts of our approach, discussing
a proposal for the implementation of the model management operator ModelGen,
which translates schemas from one model to another, focusing on the semantic
annotation context. The approach expresses the translation as Datalog rules
and exposes the source and target of the translation in a generic relational dictionary.
This makes the translation transparent, easy to customize, and model-independent.
The proposal includes the automatic generation of translations as
compositions of basic steps.</Abstract>2009-04-01T22:00:00ZModeling and interoperability: a high level perspectiveDel Nostro, Pierluigihttp://hdl.handle.net/2307/5002011-07-05T00:02:49Z2009-04-01T22:00:00Z<Title>Modeling and interoperability: a high level perspective</Title>
<Authors>Del Nostro, Pierluigi</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>This thesis tackles modeling and interoperability issues in different contexts.
We started by studying different Semantic Web models with the goal of
translating from one to another by means of a model-independent approach.
The metamodel approach that we follow is called MIDST and is based on the
concept of a supermodel, a generic model that we use to describe other models.
We have extended this approach to allow interoperability between Semantic
Web formalisms. MIDST leverages a relational dictionary, which we have
exploited as a repository for RDF documents. The logical organization that we
have defined, together with tuning techniques at the physical level, allows us
to obtain a framework for storing and querying RDF that produced excellent results
in terms of performance and scalability. Following the experience gained
in modeling Semantic Web models, we have produced a new enhancement of
MIDST's expressivity, allowing the interchange of information between ontologies
and databases. Changing context, this thesis finally describes a framework
for the modeling of time in data-intensive Web sites. Here we developed a tool
that automatically generates the Web site as a consequence of the
design choices.</Abstract>2009-04-01T22:00:00ZCombinatorial structures for communication networksSperduto, Eziohttp://hdl.handle.net/2307/4272011-06-17T00:01:44Z2009-04-01T22:00:00Z<Title>Combinatorial structures for communication networks</Title>
<Authors>Sperduto, Ezio</Authors>
<Issue Date>2009-04-02</Issue Date>
<Abstract>This thesis deals with a class of theoretical problems arising in applications
in communication networks. The dissertation is mainly divided into two parts.
In the first part, we tackle a set of survivable network design
problems. Network survivability refers to the guarantees that a communication
network provides in the event that one or more failures occur. An attack or
failure can significantly reduce the capability of the communication network to
efficiently deliver basic services to users. In several cases, when a failure occurs,
the network operators are interested in restoring traffic by re-routing it through
different links. Since re-routing traffic can be rather expensive and may cause
delays in transmissions, a key property is that traffic which is
not affected by the failure is not redirected in failure situations.
We study the problem of determining whether a given network, where traffic
is commonly routed on the edges of a shortest path tree (e.g., Ethernet networks
with the Spanning Tree Protocol), can satisfy the above-mentioned property.
We provide computational complexity results for directed and undirected networks.
In particular, for the directed case, we prove that the problem is NP-hard in
general and that it remains NP-hard in some special cases. Moreover,
we show how to assign weights to the links of the network in order to
configure a routing topology with the above-mentioned property.
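For readers unfamiliar with the routing substrate assumed above, the sketch below computes a shortest path tree from link weights using Dijkstra's algorithm; the topology and weights are invented for illustration, and this is background machinery, not the thesis's hardness construction.

```python
import heapq

def shortest_path_tree(adj, root):
    """Dijkstra: return {node: parent} encoding a shortest path tree from root.

    adj maps each node to a list of (neighbor, weight) pairs; weights are the
    link weights that, in the setting above, determine the routing topology."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent

# Invented topology: the weights make traffic from root "r" reach "d"
# via "b" rather than over the heavier direct link.
adj = {
    "r": [("a", 1), ("b", 2), ("d", 10)],
    "a": [("r", 1), ("d", 6)],
    "b": [("r", 2), ("d", 3)],
    "d": [("r", 10), ("a", 6), ("b", 3)],
}
tree = shortest_path_tree(adj, "r")
print(tree)  # parent pointers of the shortest path tree
```

Reassigning the link weights changes which tree Dijkstra produces, which is exactly the lever the weight-assignment result above exploits.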
In the second part of the thesis, we deal with a problem regarding broadcasting
in telecommunication networks. We investigate a new version of the well-known
Minimum Broadcast Time problem, which has been deeply studied in
the past, since broadcasting is a basic primitive in the communication networks
area. The fundamental requirements for a broadcast process are that it completes in
the quickest way and that, at the end of the procedure, all the peers in the network
are informed. In this thesis we deal with an objective function that takes
into account the quality of service associated with the broadcast, namely
the minimization of the average broadcast time of the peers.
We show that the considered version of the broadcast problem is NP-hard.
The problem becomes polynomially solvable, however, if the instance
graph is a tree. We also provide a distributed approximation algorithm for our
version of the broadcast problem, in which no network node knows
the network topology.</Abstract>2009-04-01T22:00:00ZMathematical model and statistical learning of HIV-1 genetic evolution mechanisms under drug pressure for treatment optimisationProsperi, Mattiahttp://hdl.handle.net/2307/1192008-12-12T01:31:00Z2008-04-06T22:00:00Z<Title>Mathematical model and statistical learning of HIV-1 genetic evolution mechanisms under drug pressure for treatment optimisation</Title>
<Authors>Prosperi, Mattia</Authors>
<Issue Date>2008-04-07</Issue Date>2008-04-06T22:00:00ZThe localization problem : From robotics to sensor networksGasparri, Andreahttp://hdl.handle.net/2307/1142008-12-12T01:30:50Z2008-04-06T22:00:00Z<Title>The localization problem : From robotics to sensor networks</Title>
<Authors>Gasparri, Andrea</Authors>
<Issue Date>2008-04-07</Issue Date>2008-04-06T22:00:00Z