Quantum Informatics for Cognitive, Social, and Semantic Processes: Papers from the AAAI Fall Symposium (FS-10-08)
Logical Leaps and Quantum Connectives: Forging Paths through Predication Space
Trevor Cohen1, Dominic Widdows2, Roger W. Schvaneveldt3, and Thomas C. Rindflesch4
1Center for Cognitive Informatics and Decision Making, University of Texas Health Science Center at Houston
3Department of Applied Psychology, Arizona State University
Abstract

The Predication-based Semantic Indexing (PSI) approach encodes both symbolic and distributional information into a semantic space using a permutation-based variant of Random Indexing. In this paper, we develop and evaluate a computational model of abductive reasoning based on PSI. Using distributional information, we identify pairs of concepts that are likely to be predicated about a common third concept, or middle term. As this occurs without the explicit identification of the middle term concerned, we refer to this process as a "logical leap". Subsequently, we use further operations in the PSI space to retrieve this middle term and identify the predicate types involved. On evaluation using a set of 1000 randomly selected cue concepts, the model is shown to retrieve with accuracy concepts that can be connected to a cue concept by a middle term, as well as the middle term concerned, using nearest-neighbor search in the PSI space. The utility of quantum logical operators as a means to identify alternative paths through predication space is also explored.

Introduction

The development of alternative approaches to automated reasoning has been a concern of the Quantum Interaction (QI) community since its inception. One line of inquiry has explored the utility of distributional models of meaning as a means of simulating abduction, the generation of new hypotheses, in a computationally tractable manner (Bruza, Widdows, and Woods, 2006). Another concern has been the combination of symbolic and distributional models, and the ways in which mathematical models derived from quantum theory might be applied to this end (Clark and Pulman, 2006). This paper describes recent developments along these lines resulting from our work with Predication-based Semantic Indexing (PSI) (Cohen, Schvaneveldt, and Rindflesch, 2009), a novel distributional model that encodes predications, or object-relation-object triplets, into a vector space using a variant of the Random Indexing model (Kanerva, Kristofersson, and Holst, 2000). These predications are extracted from citations added to MEDLINE, the most comprehensive database of biomedical literature, over the past decade, using the SemRep system (Rindflesch and Fiszman, 2003). We proceed by presenting the methodological roots and implementation of the PSI model, and follow with a discussion of the ways in which abduction can be simulated in the PSI space. Finally, we explore the use of quantum-inspired approaches to concept combination to constrain the process of abduction, with the aim of identifying associations between concepts that are of interest for the purpose of biomedical knowledge discovery.

Background

Abduction, Similarity and Scientific Discovery

Abductive reasoning, as defined by the philosopher and logician C. S. Peirce (1839-1914), is concerned with the generation of new hypotheses given a set of observations. Inductive and deductive reasoning can be applied to confirming and disproving hypotheses, but abductive reasoning is concerned with the discovery of hypotheses as candidates for further testing. Abductive reasoning does not necessarily produce a correct hypothesis, but effective abductive reasoning should lead to plausible hypotheses worthy of further examination and testing. Several factors can be seen to be at work in abductive reasoning (Schvaneveldt and Cohen, 2010). Among these is establishing new connections between concepts. For example, consider information scientist Don Swanson's seminal discovery of a therapeutically useful connection between Raynaud's disease and fish oil (Swanson, 1986). These concepts had not occurred together in the literature, but were connected to one another by Swanson by identifying potential bridging concepts that did occur with Raynaud's disease (such as blood viscosity). Concepts occurring with such bridging concepts were considered as candidates for literature-based discovery. Bruza and his colleagues note that Swanson's discovery is an example of abductive discovery, and argue that, given the constraints of the human cognitive system, deductive logic does not present a plausible model for reasoning of this nature (Bruza et al., 2006). Rather, associations between terms derived by a distributional model of meaning, in their case the Hyperspace Analog to Language (Burgess et al., 1998), are presented as an alternative, a line of investigation we have also pursued in our recent work on literature-based discovery (Cohen, Schvaneveldt, and Widdows, 2009). Specifically, we have been concerned with the ability of distributional models to generate indirect inferences: meaningful estimates of the similarity between terms that do not co-occur with one another in any document in the database. Such similarities arise because two concepts may each co-occur with the same other terms even though they never co-occur with one another. In the context of Swanson's discovery, this would involve identifying a meaningful association between Raynaud's disease and fish oil. This association would be drawn without the explicit identification of a bridging term. Having identified these associations, it would then be possible to employ some more cognitively and computationally demanding mechanism, such as symbolic logic, to further investigate the nature of the relationship between these terms. As proposed by Bruza and his colleagues, these associations serve as "primordial stimuli for practical inferences drawn at the symbolic level of cognition" (Bruza, Widdows, and Woods, 2006). The idea that some economical mechanism such as association might be useful in the identification of fruitful hypotheses for further exploration is appealing for both theoretical and practical reasons, the latter on account of the explosion in computational complexity that occurs when considering all possible relations of each potential bridging term in the context of scientific discovery. In addition, there is empirical evidence that associations drawn subconsciously can precede the solution of a problem (Durso, Rea, and Dayton, 1994). In the remainder of this paper, we will discuss the ways in which similarity and association captured by a distributional model of meaning can support both the identification and validation of hypotheses drawn from the biomedical literature. We begin by presenting some recent technical developments in the field of distributional semantics, to lay the foundation for a discussion of Predication-based Semantic Indexing (PSI) (Cohen et al. 2009), a novel distributional model we have developed in order to simulate aspects of abductive reasoning.
Permutation-based Semantic Indexing

In a previous submission to QI (Widdows and Cohen, 2009), we discussed a recent variant of the RI model developed by Sahlgren and his colleagues (Sahlgren, Holst, and Kanerva, 2008). Based on Pentti Kanerva's work on sparse high-dimensional representations (Kanerva, 2009), this model utilizes a permutation operator that shifts the elements of a sparse high-dimensional vector in order to encode the positional relationship between two terms in a sliding window. In sliding-window based variants of RI, each term is assigned both a sparse elemental vector and a semantic vector of a pre-assigned dimensionality several orders of magnitude less than the number of terms in the model (usually on the order of 1,000). Elemental vectors consist of mostly zero values, but a small number of these (usually on the order of 10) are randomly assigned as either +1 or -1, to generate a set of vectors with a high probability of being close-to-orthogonal to one another on account of their sparseness. For each term in the model, the elemental vector for every co-occurring term within a sliding window moved through the text is added to the term's semantic vector. The permutation-based model extends this approach, using shifting of elements in the elemental vector to encode the relative position of terms. Consider the following approximations of elemental vectors:

v1 = (0 0 1 0 0 0 -1 0 0 0)
v2 = (0 0 0 1 0 0 0 -1 0 0)

Vector v2 has been generated from vector v1 by shifting all of the elements of this vector one position to the right. These two vectors are orthogonal to one another, and with high-dimensional vectors it is highly probable that a vector permuted in this manner will be orthogonal, or close-to-orthogonal, to the vector from which it is derived. It is possible to reverse this transformation by shifting the elements one position to the left to regenerate v1. These properties are harnessed by Sahlgren and his colleagues to encode the relative position of terms, providing a computationally convenient alternative to Jones and Mewhort's BEAGLE model (Jones and Mewhort, 2007), which uses Plate's Holographic Reduced Representation (Plate, 2003) to achieve similar ends. Both of these approaches allow for order-based retrieval. In the case of permutation-based encoding, it is possible, by reversing the permutation used to encode position, to extract from the resulting vector space a term that occurs frequently in a particular position with respect to another term. For example, in a permutation-based space derived from the Touchstone Applied Sciences corpus, the vector derived by shifting the elements of the elemental vector for the term "president" one position to the left produces a sparse vector that is strongly associated with the semantic vectors for the terms "eisenhower", "nixon", "reagan" and "kennedy".
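To make these properties concrete, the short sketch below generates sparse ternary elemental vectors with NumPy and verifies that independently generated vectors, and a vector and its shifted image, are close-to-orthogonal, while the shift itself is exactly reversible; the final lines mimic the "president" example with a single hand-built association. The dimensionality, seed count, and sign conventions are illustrative assumptions, and this is not the implementation used for the experiments described in this paper.

import numpy as np

rng = np.random.default_rng(0)
DIM, SEEDS = 1000, 10   # illustrative values matching the orders of magnitude given in the text

def elemental():
    # Sparse ternary elemental vector: mostly zeros, with a few randomly placed +1/-1 entries.
    v = np.zeros(DIM)
    i = rng.choice(DIM, SEEDS, replace=False)
    v[i] = rng.choice([-1.0, 1.0], SEEDS)
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = elemental()
v2 = np.roll(v1, 1)                          # shift every element one position to the right
print(round(cosine(v1, elemental()), 2))     # ~0: independently generated elemental vectors are near-orthogonal
print(round(cosine(v1, v2), 2))              # ~0: the permuted vector is near-orthogonal to its source
print(np.array_equal(np.roll(v2, -1), v1))   # True: shifting back one position to the left regenerates v1

# Order-based retrieval in miniature, after the "president" example (sign convention assumed):
elem_president = elemental()
sem_eisenhower = np.roll(elem_president, -1) + elemental()   # "president" one position to the left, plus other contexts
print(round(cosine(np.roll(elem_president, -1), sem_eisenhower), 2))   # ~0.7: the permuted cue remains strongly associated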
Predication-based Semantic Indexing (PSI)

While the incorporation of additional information related to word order facilitates new types of queries, and has been shown to improve performance in certain evaluations (Sahlgren et al., 2008), the associations derived between terms are general in nature. However, it has been argued that the fundamental unit of meaning in text comprehension is not an individual term, but an object-relation-object triplet, or proposition. This unit of meaning is also termed a predication in logic, and is considered to be the atomic unit of meaning in memory in cognitive theories of text comprehension (Kintsch, 1998).

In our recent work (Cohen, Schvaneveldt and Rindflesch, 2009) we adapt the permutation-based approach developed by Sahlgren et al. to encode object-relation-object triplets, or predications, into a reduced-dimensional vector space. These triplets are derived from all of the titles and abstracts added to MEDLINE, the largest existing repository of biomedical citation data, over the past decade by the SemRep system (see below). To achieve this end, we assign a sparse elemental vector and a semantic vector to each unique concept extracted by SemRep, and a sequential number to each of a set of predicate types SemRep recognizes. For example, the predicates "TREATS", "CAUSES" and "ISA" are assigned the numbers 38, 7, and 22 respectively. Rather than use positional shifting to encode the relative position of terms, we use positional shifts to encode the type of predicate that links two concepts. Consequently, each time the predication "sherry ISA wine" occurs in the set of predications extracted by SemRep, we shift the elemental vector for the concept "sherry" 22 positions to the right, to signify an ISA relationship. We then add this permuted elemental vector to the semantic vector for "wine". Conversely, we shift the elemental vector for "wine" 22 positions to the left, and add this permuted elemental vector to the semantic vector for "sherry". Encoding predicate type in this manner facilitates a form of predication-based retrieval that is analogous to the order-based retrieval employed by Sahlgren and his colleagues. For example, permuting the elemental vector for "wine" 22 positions to the left produces a sparse vector with the nearest neighboring semantic vectors and association strengths shown in Table 1 (left).

Table 1. Results of the predication-based queries "? ISA wine" (left) and "? ISA food" (right).

Further details of the implementation of this model, and examples of the sorts of queries it enables, can be found in (Cohen, Schvaneveldt and Rindflesch 2009). For the purposes of this paper, we have modified the model in order to facilitate the recognition of terms that are meaningfully connected by a bridging term. In PSI, each unique predicate-concept pair is assigned a unique (permuted) elemental vector. Consequently, the semantic vectors for any two concepts should only be similar to one another if they occur in the same predication type with the same bridging concept (discounting unintended random overlap). This constraint is too tight to support scientific discovery, or to model abduction. Consequently, in the current iteration of PSI, in addition to adding the predicate-appropriate permutation of an elemental vector to the semantic vector of the other concept in a predication, we also add the unpermuted elemental vector for this concept. The procedure to encode the predication "sherry ISA wine" would then be as follows. First, add the elemental vector for sherry to the semantic vector for wine. Next, shift the elemental vector for sherry right 22 positions and add this to the semantic vector for wine. The converse would be performed as described previously, but both the permuted and unpermuted elemental vectors for wine would be added to the semantic vector for sherry. Encoding predicate-specific and general relatedness in this manner is analogous to the encoding of "order-based" and "content-based" relatedness in approaches that capture the relative position of terms (Sahlgren, Holst and Kanerva 2008).
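The encoding just described can be sketched as follows, using dense NumPy arrays in place of the sparse representations used in practice, a toy vocabulary, and the predicate-to-shift assignments quoted above. The sketch is intended only to illustrate the mechanics of predicate-specific encoding and a "? ISA wine" style query; the function and variable names are ours, not those of the system evaluated here.

import numpy as np

rng = np.random.default_rng(0)
DIM, SEEDS = 500, 10
PRED = {"TREATS": 38, "CAUSES": 7, "ISA": 22}    # predicate-to-shift assignments from the text

def elemental():
    v = np.zeros(DIM); i = rng.choice(DIM, SEEDS, replace=False)
    v[i] = rng.choice([-1.0, 1.0], SEEDS)
    return v

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

concepts = ["sherry", "port", "wine", "food"]
elem = {c: elemental() for c in concepts}
sem = {c: np.zeros(DIM) for c in concepts}

def encode(subj, pred, obj):
    # Encode "subj PRED obj": each concept receives the other's elemental vector both
    # unpermuted (general relatedness) and shifted by the predicate's number (predicate-specific).
    k = PRED[pred]
    sem[obj] += elem[subj] + np.roll(elem[subj], k)     # e.g. sherry's vector, shifted 22 right, into wine
    sem[subj] += elem[obj] + np.roll(elem[obj], -k)     # wine's vector, shifted 22 left, into sherry

encode("sherry", "ISA", "wine")
encode("port", "ISA", "wine")

# Predication-based query "? ISA wine": permute wine's elemental vector 22 positions to the left.
query = np.roll(elem["wine"], -PRED["ISA"])
print(sorted(concepts, key=lambda c: -cosine(query, sem[c]))[:2])   # sherry and port rank highest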
SemRep

The predications encoded by the PSI model are derived from the biomedical literature by the SemRep system. SemRep is a symbolic natural language processing system that identifies semantic predications in biomedical text. For example, SemRep extracts "Acetylcholine STIMULATES Nitric Oxide" from the sentence "In humans, ACh evoked a dose-dependent increase of NO levels in exhaled air." SemRep is linguistically based and depends intensively on structured biomedical domain knowledge in the Unified Medical Language System (the UMLS SPECIALIST Lexicon, Metathesaurus, and Semantic Network (Bodenreider 2004)). At the core of SemRep processing is a partial syntactic analysis in which simple noun phrases are enhanced with Metathesaurus concepts. Rules first link syntactic elements (such as verbs and nominalizations) to ontological predicates in the Semantic Network and then find syntactically allowable noun phrases to serve as arguments. A metarule relies on semantic classes associated with Metathesaurus concepts to ensure that constraints enforced by the Semantic Network are satisfied.

SemRep provides underspecified interpretation for a range of syntactic structures rather than detailed representation for a limited number of phenomena. Thirty core predications in clinical medicine, genetic etiology of disease, pharmacogenomics, and molecular biology are retrieved. Quantification, tense and modality, and predicates taking predicational arguments are not addressed. The application has been used to extract 23,751,028 predication tokens from 6,964,326 MEDLINE citations (with dates between 01/10/1999 and 03/31/2010). Several evaluations of SemRep are reported in the literature. For example, in Ahlers et al. (2004), .73 precision and .55 recall (.63 f-score) resulted from a reference standard of 850 predications in 300 sentences randomly selected from MEDLINE citations. Kilicoglu et al. (2010) report .75 precision and .64 recall (.69 f-score) based on 569 predications annotated in 300 sentences from 239 MEDLINE citations. Consequently, the set of predications extracted by SemRep presents a considerable resource for biomedical knowledge discovery.

Abduction in PSI-space

For the reasons described previously, the stepwise traversal of all concepts in predications with each middle term that occurs in a predicate with a cue concept is not plausible as a computational model of abduction. Consequently, we have developed a model in which the search for a middle term is guided by an initial "logical leap" from cue concept to target concept. Our model of abduction consists of the following three steps:

1. Identification of the nearest neighboring semantic vector to the semantic vector of a concept of interest.
2. Identification of a third "middle term" between the cue concept and the nearest neighbor. This is accomplished by taking the normalized vector sum (or vector average) of the semantic vectors for these two concepts, and finding the most similar elemental vector.
3. Decoding of the predicates that link the three concepts identified. For each pair of concepts, this is accomplished by retrieving the elemental vector for one, and the semantic vector for the other, and shifting one of these by the number corresponding to each encoded predication, to identify the predicate that fits best.

For example, the nearest neighboring semantic vector to that of "pastry" represents "rusk". The nearest neighboring elemental vector to the vector average of these two semantic vectors is the elemental vector for "food". Decoding these predicates retrieves the predication pair "rusk ISA food; pastry ISA food".
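The three steps can be sketched as follows, again with dense NumPy arrays and a handful of invented predications standing in for the SemRep-derived space; the helper functions, example concepts, and tie-breaking behavior are illustrative assumptions rather than the implementation used for the evaluation that follows.

import numpy as np

rng = np.random.default_rng(0)
DIM, SEEDS = 500, 10
PRED = {"TREATS": 38, "CAUSES": 7, "ISA": 22}

def elemental():
    v = np.zeros(DIM); i = rng.choice(DIM, SEEDS, replace=False)
    v[i] = rng.choice([-1.0, 1.0], SEEDS)
    return v

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

concepts = ["pastry", "rusk", "food", "sherry", "wine"]
elem = {c: elemental() for c in concepts}
sem = {c: np.zeros(DIM) for c in concepts}

def encode(subj, pred, obj):
    k = PRED[pred]
    sem[obj] += elem[subj] + np.roll(elem[subj], k)
    sem[subj] += elem[obj] + np.roll(elem[obj], -k)

for s, p, o in [("pastry", "ISA", "food"), ("rusk", "ISA", "food"), ("sherry", "ISA", "wine")]:
    encode(s, p, o)

def nearest(target, table, exclude=()):
    # Concept whose vector in `table` has the highest cosine with `target`.
    return max((c for c in table if c not in exclude), key=lambda c: cosine(target, table[c]))

def decode(a, b):
    # Try every predicate shift in both directions; return the best-fitting predication.
    cands = [((a, p, b), cosine(np.roll(elem[a], k), sem[b])) for p, k in PRED.items()]
    cands += [((b, p, a), cosine(np.roll(elem[b], k), sem[a])) for p, k in PRED.items()]
    return max(cands, key=lambda t: t[1])[0]

cue = "pastry"
target = nearest(sem[cue], sem, exclude=(cue,))        # step 1: "logical leap" to the nearest neighbor (rusk)
blend = (sem[cue] + sem[target]) / 2.0                 # step 2: vector average of the two semantic vectors ...
middle = nearest(blend, elem, exclude=(cue, target))   # ... and nearest elemental vector (food)
print(decode(cue, middle), decode(target, middle))     # step 3: ('pastry', 'ISA', 'food') ('rusk', 'ISA', 'food')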
Such "logical leaps" may correspond to an intuitive sense of association in psychological terms. The underlying mechanism may involve associations arising from related patterns of associated neighbors rather than any direct association. These indirect associations are likely to be weaker than direct associations, so detecting and reflecting on them may not occur without some effort directed toward searching for potential hypotheses, solutions, or discoveries. Psychological research has provided evidence that such associations occur in learning and memory experiments (Dougher et al., 1994, 2007; Sidman, 2000). Once detected, indirect associations could be pursued in a more conscious, symbolic way to identify common neighbors or middle terms on the way to assessing the value of the indirect associations. Our computational methods can be seen as ways to simulate the generation and evaluation of such potential discoveries.

In order to evaluate the extent to which this approach can be used to both identify and characterize the nature of meaningful associations, we select at random 1000 UMLS concepts extracted by SemRep from MEDLINE over the past decade. We include only concepts that occur between 10 and 50,000 times in this dataset, to select for concepts that have sufficient data points to generate meaningful associations and to eliminate concepts that carry little information content from the test set. We generate a 500-dimensional PSI space derived from all of the predications extracted by SemRep from citations added to MEDLINE over the past decade (n = 22,669,964), excluding negations (x does_not_treat y). We also exclude any predication involving the predicate "PROCESS_OF", as these are highly prevalent but tend to be uninformative (for example, "tuberculosis PROCESS_OF patients"). For the same reason, we exclude any concepts that occur more than [...]. We then follow the procedure described previously, taking the nearest neighboring semantic vector of each cue concept, generating the vector average of these two vectors, searching for the nearest elemental vector, and using the decoding process to find the predication that best links each pair of concepts (cue and middle term, and target and middle term). We then evaluate these predications against the original database, to determine whether they are accurate. Of the 1000 cue concepts it was possible to evaluate 999, as one concept occurred only in predications that were not included in the model (such as PROCESS_OF). Of these 999 concepts, a legitimate target concept and middle term were identified for 962 of them, which can be considered a precision of 0.963 if retrieval of a set of accurate relationships from the database is taken as the objective.
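The accuracy check applied to each cue concept can be pictured as in the fragment below, assuming, as the description above suggests but does not spell out, that a cue counts as accurate only when both decoded predications occur in the SemRep-derived database; all names and triples here are invented.

# Hypothetical illustration of the evaluation: each cue yields two decoded predications
# (cue-middle and target-middle); a cue is counted as accurate only if both appear in the
# predication database.
database = {("pastry", "ISA", "food"), ("rusk", "ISA", "food"),
            ("sherry", "ISA", "wine"), ("port", "ISA", "wine")}

decoded = {                      # cue -> (cue-middle predication, target-middle predication)
    "pastry": (("pastry", "ISA", "food"), ("rusk", "ISA", "food")),
    "sherry": (("sherry", "ISA", "wine"), ("port", "ISA", "wine")),
    "aspirin": (("aspirin", "ISA", "wine"), ("port", "ISA", "wine")),  # an inaccurate leap
}

accurate = [cue for cue, pair in decoded.items() if all(p in database for p in pair)]
print(len(accurate) / len(decoded))   # precision over the evaluable cue concepts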
Figure 1: Cosine association and accuracy.

Accurately retrieved results tended to have a higher cosine association between the middle term and the vector average constructed from the cue concept and its nearest neighboring semantic vector, as illustrated in Figure 1, which shows the number of accurate and inaccurate results at different association strengths. Table 2 shows the five most strongly associated middle terms across this test set, together with the predicates linking them to the cue and target concepts. In the first example, [...]

Table 2: "Logical leaps". Cue concepts are in bold, and nearest neighbors are underlined. cos = cosine.

These examples illustrate the ability of vectors encoded using PSI to capture similarity between concepts linked by a middle term without the need to explicitly retrieve this term. However, at times it may be of greater interest to explore some subset of this space, so as to retrieve concepts linked by specific predicate types. One goal of this research is to develop computational tools with which scientists can explore the conceptual territory of their domain of interest. Just as users of a vector-based information retrieval system require methods through which to direct their search for documents, there is a need for the development of methods through which a scientist might further refine the search for new ideas.

Quantum Operators in PSI Space
One potential solution to the problem of constraining search is suggested by the analogy drawn between the many senses of a term that may be captured by a term vector in geometric models of meaning, and the many potential states of a particle that are represented by a state vector in quantum mechanics (Widdows and Peters, 2003). With respect to PSI, the semantic vector representing a concept can be viewed as a mixture of elemental vectors representing each predicate-concept pair and concept it occurs with. This analogy supports the application of the operators of quantum logic, as described by Birkhoff and von Neumann (Birkhoff and von Neumann, 1936), to semantic vectors, resulting in the definition of semantic space operators effecting quantum logical negation and disjunction in semantic space (Widdows and Peters, 2003).

Negation

Negation in semantic space involves eliminating an undesired sense of a term by subtracting that component of a term vector that is shared with a candidate term representing the undesired sense. For example, the term "pop" can be used to eliminate the musical sense of the term "rock" (Widdows, 2004). This is accomplished by projecting the vector for "rock" onto the vector for "pop" (to identify the shared component), and subtracting this projection from the vector for "rock". The resulting vector will be orthogonal to the vector for "pop", and as such will not be strongly associated with vectors representing music-related concepts that are similar to the vector for "pop", but will retain similarity to terms such as "limestone" that represent the geological sense of "rock".

A similar approach can be applied to the semantic vectors generated using PSI, in order to direct the search for related concepts away from a nearest neighbor that has been identified. As is the case with terms, one would anticipate that this approach would eliminate not only the specific concept concerned, but also a set of related concepts. Specifically, we anticipate that this approach would identify a new path involving a different middle term (or group of terms), without the explicit identification of the middle term to be avoided beforehand.

In order to evaluate the extent to which negation can be used to identify new pathways in PSI space, we take the same set of 1000 randomly selected concepts as cue concepts. For each cue concept, we retrieve the vector for the concept (cue_concept), and the vector for the nearest neighbor previously retrieved (nn_previous). We then use negation to extract the component of cue_concept that is orthogonal to nn_previous, and find the nearest neighboring semantic vector to this combined vector (nn_current). Finally, we take the vector average of cue_concept and nn_current, render this orthogonal to nn_previous using negation, and find the nearest neighboring elemental vector to this combined vector. We then decode the predicates concerned using the permutation operator as described previously.
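The negation operator itself reduces to removing a projection, as in the sketch below; random vectors stand in for semantic vectors, and the commented search steps are hypothetical placeholders rather than functions of the actual system.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def negate(a, b):
    # "a NOT b": subtract from a the component it shares with b, leaving a vector orthogonal to b.
    b_hat = normalize(b)
    return a - (a @ b_hat) * b_hat

rng = np.random.default_rng(2)
cue_concept = rng.standard_normal(500)     # stand-ins for semantic vectors in the PSI space
nn_previous = rng.standard_normal(500)

residual = negate(cue_concept, nn_previous)
print(round(float(residual @ normalize(nn_previous)), 10))   # ~0: orthogonal to the negated vector

# Outline of the experiment described above (the search steps are hypothetical placeholders):
# nn_current = nearest_semantic_neighbor(residual)
# blend = negate((cue_concept + nn_current) / 2.0, nn_previous)
# middle = nearest_elemental_vector(blend); then decode the predicates as before.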
Table 3: Negation to identify new paths (n=997).

The results of this experiment are shown in Table 3. It was possible to obtain results for 997 of the set of 1000. One concept was excluded for the same reason as before, and another two were excluded as the negation operator produced a zero vector, because these concepts occurred exclusively in predications with a single predicate-concept pair. As anticipated, in every case negation eliminated the concept represented by nn_previous. However, this result could have been obtained using boolean negation, which is the equivalent of simply selecting the next-nearest neighbor, as we have done for comparison purposes. Of greater interest is the extent to which the use of quantum negation eliminates the path across a middle term that was used to identify a previous neighbor. This occurred after quantum negation in 94.1% of cases, as opposed to 27.7% in the case of boolean negation. A concern with the use of this method is that the orthogonalization process may introduce further errors as concept vectors are distorted beyond recognition. However, as shown in Table 3, this process led to only slightly more erroneous predications than were obtained with boolean negation. Interestingly, the set of errors produced in the original experiment has very few elements in common with the set produced after quantum negation: erroneous predications were produced for only four of the same cue concepts in both conditions.
Dual Dissection

We note that it is possible to select for particular predicate types by reversing the permutation operator that corresponds to the predicate of interest. For example, the predication A TREATS B is encoded by shifting the elemental vector for A, Ae, 38 steps to the right, and adding this to the semantic vector for B. The unpermuted vector, Ae, is also added to this vector. Applying the reverse shift to this semantic vector, to produce B^, should produce a vector that retains some remnant of the original Ae. As this remnant should be encoded in both the original semantic vector for B and its permutation, B^, we attempt to extract the common components of these vectors using the following procedure, which we will term dissection: [...]

To illustrate the utility of this approach, we present a series of examples in which we attempt logical leaps by applying the dissection method to both the cue term and candidate nearest neighbors. For example, consider a logical leap of the form "X ISA Y; Y TREATS Z", where Z is the cue term. In the case of the cue term, we perform the reverse of the "TREATS" permutation prior to dissection. For each target term we perform the "ISA" permutation before dissection. After dissection, we measure the cosine between these transformed vectors to find a best match.

Table 4: Leaps across specific predicates. * denotes concepts that do not occur in a predicate with the cue.

Table 4 illustrates some examples of dissection-based searches. Nearest neighbors for each search are on the left, and the pattern of the strongest connection through a middle term in each case is shown on the right. In each case, the same pattern of strongest connections was shared by all five of the nearest neighbors shown, and corresponds to the pattern specified using the dissection-based approach. In all cases, the five nearest neighbors are different from those retrieved by a logical leap search without dissection, and in many cases (denoted by an *), the nearest neighbors are concepts that do not occur with the cue concept directly in any predication in the database. These examples illustrate the way in which paired permutations can be used to infer information beyond that which is stated explicitly in the database. The system has inferred plausible treatments for depressive disorders and dysthymic disorder, and gene/protein-disease associations related to prion diseases, based on taxonomic relationships extracted from the literature by SemRep. While these examples illustrate only a few possibilities for the application of dual dissection, it was not difficult to generate others. We found that this approach frequently results in logical leaps of the desired form. Pitfalls include a tendency to generalize too generously (for example, therapeutic associations involving high-level middle terms such as "pharmaceutical preparation"), and failure to isolate the desired predicate path. This was encountered with terms that occur in many predication relationships. In these cases, "correct" results would be interspersed amongst results linked to the cue term in other ways.
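A rough sketch of such a predicate-constrained leap is given below, using the same toy encoding as the earlier sketches and invented predications. The dissection step itself (extracting the component shared by a semantic vector and its permuted image) is not reproduced; the direct cosine comparison of permuted semantic vectors used here is a simplifying assumption rather than the procedure behind the results in Table 4.

import numpy as np

rng = np.random.default_rng(0)
DIM, SEEDS = 500, 10
PRED = {"TREATS": 38, "ISA": 22}

def elemental():
    v = np.zeros(DIM); i = rng.choice(DIM, SEEDS, replace=False)
    v[i] = rng.choice([-1.0, 1.0], SEEDS)
    return v

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

concepts = ["fluoxetine", "antidepressant", "depression", "aspirin", "analgesic", "headache"]
elem = {c: elemental() for c in concepts}
sem = {c: np.zeros(DIM) for c in concepts}

def encode(subj, pred, obj):
    k = PRED[pred]
    sem[obj] += elem[subj] + np.roll(elem[subj], k)
    sem[subj] += elem[obj] + np.roll(elem[obj], -k)

# Toy predications (hypothetical, not SemRep output).
for s, p, o in [("fluoxetine", "ISA", "antidepressant"), ("antidepressant", "TREATS", "depression"),
                ("aspirin", "ISA", "analgesic"), ("analgesic", "TREATS", "headache")]:
    encode(s, p, o)

# Leap of the form "X ISA Y; Y TREATS Z", with Z = depression as the cue: undo the TREATS shift
# on the cue's semantic vector, apply the ISA shift to each candidate, and compare; both
# transformed vectors retain a remnant of the elemental vector for the unnamed middle term Y.
cue_vec = np.roll(sem["depression"], -PRED["TREATS"])
scores = {x: cosine(np.roll(sem[x], PRED["ISA"]), cue_vec) for x in concepts if x != "depression"}
print(max(scores, key=scores.get))   # fluoxetine, via the middle term "antidepressant"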
Dissection and Disjunction

Once vectors representing the desired sense of a concept have been isolated using this procedure, it is possible to construct a subspace with these vectors as bases. This subspace then represents the set {sense1 OR sense2 OR … sense n} and can be modeled using quantum disjunction (Widdows and Peters 2003), after ensuring the bases of the subspace are orthogonal to one another using the Gram-Schmidt procedure. The association strength between each semantic vector and this subspace can then be measured by projecting the semantic vector into the subspace and measuring the cosine between the original semantic vector and its projection. This allows us to broaden the scope of our search. For example, we might expand the query in Leap 1 to [...]
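A minimal sketch of this construction is given below, assuming generic NumPy vectors in place of the isolated sense vectors and omitting the preceding dissection step; the function names are ours.

import numpy as np

def orthonormal_basis(vectors):
    # Gram-Schmidt: turn the sense vectors into an orthonormal basis for the disjunction subspace.
    basis = []
    for v in vectors:
        w = v - sum((v @ b) * b for b in basis)
        n = np.linalg.norm(w)
        if n > 1e-10:
            basis.append(w / n)
    return basis

def similarity_to_subspace(v, basis):
    # Cosine between v and its projection into the subspace spanned by `basis`.
    proj = sum((v @ b) * b for b in basis)
    return float(v @ proj / (np.linalg.norm(v) * np.linalg.norm(proj) + 1e-12))

rng = np.random.default_rng(3)
sense1, sense2 = rng.standard_normal(500), rng.standard_normal(500)   # stand-ins for isolated sense vectors
basis = orthonormal_basis([sense1, sense2])                           # models {sense1 OR sense2}
query = 0.7 * sense1 + 0.1 * rng.standard_normal(500)
print(round(similarity_to_subspace(query, basis), 2))   # high: the query lies largely within the disjunction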
Conclusion

In this paper, we develop and evaluate a model of automated reasoning based on "logical leaps", in which meaningful associations between concepts derived from distributional statistics are used to identify candidates for connection via a third concept, and to identify the nature of the relations involved. The chain of predicates constructed in this manner can subsequently be processed using symbolic methods. Consequently, the vector-based "logical leaps" approach relates to Gardenfors' proposal that conceptual representation at a geometric level might provide support for symbolic level processes (Gardenfors 2000). While this approach is able to infer plausible connections between concepts, this inference occurs at the geometric level, avoiding the computational complexity of extensive symbolic inference. Furthermore, the vector spaces used for these experiments can be retained in RAM, to facilitate rapid, dynamic, interactive exploration of biomedical concepts to support discovery. Vector operators derived from quantum logic show promise as a means to direct such searches away from previously trodden paths, and exploratory work suggests there may be ways to adapt these operators to guide search toward conceptual territory of interest. Of particular interest for future work is the evaluation of the extent to which these operators might be used to model "discovery patterns" (Hristovski, Friedman and Rindflesch 2008), combinations of predications that have been shown to be useful for literature-based discovery.
References
Birkhoff, G., and von Neumann, J. 1936. The Logic of Quantum Mechanics. Annals of Mathematics 37.
Bodenreider, O. 2004. The Unified Medical Language System (UMLS): Integrating Biomedical Terminology. Nucleic Acids Research 32.
Bruza, P.; Widdows, D.; and Woods, J. 2006. A Quantum Logic of Down Below.
Burgess, C.; Livesay, K.; and Lund, K. 1998. Explorations in Context Space: Words, Sentences, Discourse. Discourse Processes 25.
Clark, S., and Pulman, S. 2006. Combining Symbolic and Distributional Models of Meaning.
Cohen, T.; Schvaneveldt, R.; and Rindflesch, T. 2009. Predication-based Semantic Indexing: Permutations as a Means to Encode Predications in Semantic Space. In AMIA Annual Symposium Proceedings.
Cohen, T.; Schvaneveldt, R.; and Widdows, D. 2009. Reflective Random Indexing and Indirect Inference: A Scalable Method for Discovery of Implicit Connections. Journal of Biomedical Informatics.
Dougher, M. J.; Augustson, E.; Markham, M. R.; Greenway, D. E.; and Wulfert, E. 1994. The Transfer of Respondent Eliciting and Extinction Functions through Stimulus Equivalence Classes. Journal of the Experimental Analysis of Behavior 62.
Durso, F. T.; Rea, C. B.; and Dayton, T. 1994. Graph-Theoretic Confirmation of Restructuring during Insight. Psychological Science 5.
Gardenfors, P. 2000. Conceptual Spaces: The Geometry of Thought. Cambridge, MA: MIT Press.
Hristovski, D.; Friedman, C.; Rindflesch, T. C.; and Peterlin, B. 2008. Literature-Based Knowledge Discovery using Natural Language Processing. In Bruza, P., and Weeber, M., eds., Literature-Based Discovery. Springer.
Jones, M. N., and Mewhort, D. J. K. 2007. Representing Word Meaning and Order Information in a Composite Holographic Lexicon. Psychological Review 114.
Kanerva, P. 2009. Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors. Cognitive Computation 1.
Kanerva, P.; Kristofersson, J.; and Holst, A. 2000. Random Indexing of Text Samples for Latent Semantic Analysis. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society.
Kintsch, W. 1998. Comprehension: A Paradigm for Cognition. Cambridge: Cambridge University Press.
Plate, T. A. 2003. Holographic Reduced Representation: Distributed Representation for Cognitive Structures. Stanford, CA: CSLI Publications.
Rindflesch, T. C., and Fiszman, M. 2003. The Interaction of Domain Knowledge and Linguistic Structure in Natural Language Processing: Interpreting Hypernymic Propositions in Biomedical Text. Journal of Biomedical Informatics 36.
Sahlgren, M.; Holst, A.; and Kanerva, P. 2008. Permutations as a Means to Encode Order in Word Space. In Proceedings of the 30th Annual Conference of the Cognitive Science Society.
Schvaneveldt, R. W., and Cohen, T. A. 2010. Abductive Reasoning and Similarity. In Ifenthaler, D.; Pirnay-Dummer, P.; and Seel, N. M., eds., Computer-Based Diagnostics and Systematic Analysis of Knowledge. Springer.
Sidman, M. 2000. Equivalence Relations and the Reinforcement Contingency. Journal of the Experimental Analysis of Behavior 74.
Swanson, D. R. 1986. Fish Oil, Raynaud's Syndrome, and Undiscovered Public Knowledge. Perspectives in Biology and Medicine 30.
Widdows, D. 2004. Geometry and Meaning. Stanford, CA: CSLI Publications.
Widdows, D., and Cohen, T. 2009. Semantic Vector Combinations and the Synthetic Logic. In Proceedings of the Third International Symposium on Quantum Interaction.
Widdows, D., and Peters, S. 2003. Word Vectors and Quantum Logic: Experiments with Negation and Disjunction. In Proceedings of the Eighth Mathematics of Language Conference.