Full-Text Search Engine: Solr

1. Deploy Solr

  a. Download Solr and unpack the archive.

  b. Import the project (as a standalone webapp):

    Copy the webapp folder under server\solr-webapp in the extracted archive into Tomcat's webapps directory and rename it to solr.

  c. Add the required JARs and the log4j configuration:

    Copy all JARs under server\lib\ext in the extracted archive into the lib directory of the solr webapp under Tomcat.

    Copy all JARs whose names start with metrics under server\lib into the lib directory of the solr webapp under Tomcat.

    Copy solr-dataimporthandler-7.2.1.jar and solr-dataimporthandler-extras-7.2.1.jar from dist into the lib directory of the solr webapp under Tomcat as well.

    Copy log4j.properties from server\resources into the solr webapp's WEB-INF\classes directory under Tomcat (create the classes folder if it does not exist).

  d. Create solrHome

    Copy the solr folder out of the extracted server directory to a convenient location (e.g. a tools folder of your choice) and rename it (here: solrHome7\solrHome).

  e. Point web.xml at solrHome

    Edit web.xml of the solr webapp under Tomcat so that the solr/home <env-entry> points to your solrHome, and comment out the <security-constraint> blocks:

    <env-entry>
       <env-entry-name>solr/home</env-entry-name>
       <env-entry-value>D:\JavaTools\solrHome7\solrHome</env-entry-value>
       <env-entry-type>java.lang.String</env-entry-type>
    </env-entry>
  <!-- Get rid of error message -->
  <!-- <security-constraint>
    <web-resource-collection>
      <web-resource-name>Disable TRACE</web-resource-name>
      <url-pattern>/</url-pattern>
      <http-method>TRACE</http-method>
    </web-resource-collection>
    <auth-constraint/>
  </security-constraint>
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>Enable everything but TRACE</web-resource-name>
      <url-pattern>/</url-pattern>
      <http-method-omission>TRACE</http-method-omission>
    </web-resource-collection>
  </security-constraint> -->

 

  f. Now start Tomcat and open localhost:8080/solr/index.html to reach the Solr admin UI.
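
Once the admin UI loads, the deployment can also be checked programmatically with SolrJ, the Java client shipped in the dist folder of the download. The sketch below is only a rough illustration; it assumes solr-solrj-7.2.1.jar (plus its dependencies) is on the classpath and that Solr is reachable at http://localhost:8080/solr, and it simply asks the Core Admin API which cores are loaded.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

public class SolrDeployCheck {
    public static void main(String[] args) throws Exception {
        // Base URL of the solr webapp deployed in Tomcat (assumed; adjust host/port as needed)
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8080/solr").build()) {
            // STATUS without a core name reports every core Solr has loaded
            CoreAdminResponse status = CoreAdminRequest.getStatus(null, client);
            System.out.println("Loaded cores: " + status.getCoreStatus().size());
        }
    }
}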

 

2. Configure a database connection in Solr and import data

  a. Copy the contrib and dist folders from the extracted archive into solrHome7, next to solrHome.

  b. Create a lib folder under contrib\dataimporthandler and copy solr-dataimporthandler-7.2.1.jar (the JAR from step 1-c above) into it.

  c. Create a db folder under contrib, then a lib folder inside it, and copy the MySQL JDBC driver (mysql-connector-java-5.1.39-bin.jar) into it. The folder name must match the contrib/db/lib path referenced in solrconfig.xml below.

  d. Create the core1 core

    Create a folder named core1 under solrHome7\solrHome.

    Copy the conf folder from solrHome7\solrHome\configsets\sample_techproducts_configs into solrHome7\solrHome\core1. Then register the core so Solr loads it, for example via Core Admin > Add Core in the admin UI (name and instanceDir both set to core1), or by placing a core.properties file containing name=core1 inside the core1 folder.

  e. Create data-config.xml in the conf folder:

<dataConfig>
    <!-- MySQL connection settings -->
    <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/blog" user="root" password="1452"/>
    <document>
        <!-- The name attribute identifies this entity (one document per row); it can be any name -->
        <!-- query is the SQL statement whose result set is imported -->
        <entity name="article" query="select * from article">
            <!-- Each field maps a database column to a document field: column is the database column, name is the Solr field (it must be defined in managed-schema) -->
            <field column="article_id" name="article_id"/>
            <field column="article_title" name="article_title"/>
            <field column="article_content" name="article_content"/>
            <field column="user_id" name="user_id"/>
            <field column="article_type" name="article_type"/>
            <field column="edit_date" name="edit_date"/>
            <field column="read_num" name="read_num"/>
            <field column="article_summary" name="article_summary"/>
        </entity>
    </document>
</dataConfig>
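
Before pointing Solr at the database, it can be worth verifying the connection settings and the SQL query from data-config.xml with plain JDBC. A minimal sketch, using the same assumed URL, user, password, and query as above (the MySQL driver JAR must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcCheck {
    public static void main(String[] args) throws Exception {
        // Same connection settings as in data-config.xml (assumed values)
        String url = "jdbc:mysql://localhost:3306/blog";
        try (Connection conn = DriverManager.getConnection(url, "root", "1452");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from article")) {
            int rows = 0;
            while (rs.next()) {
                rows++;
            }
            System.out.println("article rows: " + rows);
        }
    }
}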

 

  f. Edit solrconfig.xml in the conf folder: add <lib> directives for the DataImportHandler and JDBC driver JARs and register the /dataimport request handler. Note that relative <lib dir> paths are resolved against the core's instanceDir (solrHome\core1 here), so make sure the paths below actually reach the contrib folders created under solrHome7 and adjust them if they do not (e.g. ../../contrib/dataimporthandler/lib and ../../contrib/db/lib).

  <lib dir="${solr.install.dir:../../../..}/contrib/dataimporthandler/lib" regex=".*\.jar" />
  <lib dir="${solr.install.dir:../../../..}/contrib/db/lib" regex=".*\.jar" />
    <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
      <lst name="defaults">
            <str name="config">data-config.xml</str>
      </lst>
    </requestHandler>
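
With the handler registered (and the core reloaded or Tomcat restarted), a full import can be started from the Dataimport tab of core1 in the admin UI, or over HTTP. As a rough SolrJ sketch only, assuming the core is reachable at http://localhost:8080/solr/core1, the following sends command=full-import to the /dataimport handler and then asks for its status (the import itself runs asynchronously):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DataImportTrigger {
    public static void main(String[] args) throws Exception {
        // URL of the core created above (assumed name: core1)
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8080/solr/core1").build()) {
            // Send command=full-import to the /dataimport request handler
            SolrQuery importCmd = new SolrQuery();
            importCmd.setRequestHandler("/dataimport");
            importCmd.set("command", "full-import");
            importCmd.set("clean", "true");   // clear the index first
            importCmd.set("commit", "true");  // commit once the import finishes
            client.query(importCmd);

            // Ask the handler how the import is going ("busy" while running, "idle" when done)
            SolrQuery statusCmd = new SolrQuery();
            statusCmd.setRequestHandler("/dataimport");
            statusCmd.set("command", "status");
            QueryResponse status = client.query(statusCmd);
            System.out.println(status.getResponse().get("status"));
        }
    }
}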

 

  g. Edit managed-schema in the conf folder (add the fields for the article table; the stock type definitions are left unchanged):

<?xml version="1.0" encoding="UTF-8" ?>
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

<!--  
 This is the Solr schema file. This file should be named "schema.xml" and
 should be in the conf directory under the solr home
 (i.e. ./solr/conf/schema.xml by default) 
 or located where the classloader for the Solr webapp can find it.

 This example schema is the recommended starting point for users.
 It should be kept correct and concise, usable out-of-the-box.

 For more information, on how to customize this file, please see
 http://wiki.apache.org/solr/SchemaXml

 PERFORMANCE NOTE: this schema includes many optional features and should not
 be used for benchmarking.  To improve performance one could
  - set stored="false" for all fields possible (esp large fields) when you
    only need to search on the field but don't need to return the original
    value.
  - set indexed="false" if you don't need to search on the field, but only
    return the field as a result of searching on other indexed fields.
  - remove all unneeded copyField statements
  - for best index size and searching performance, set "index" to false
    for all general text fields, use copyField to copy them to the
    catchall "text" field, and use that for searching.
  - For maximum indexing performance, use the ConcurrentUpdateSolrServer
    java client.
  - Remember to run the JVM in server mode, and use a higher logging level
    that avoids logging every request
-->

<schema name="example" version="1.6">
    <uniqueKey>article_id</uniqueKey>
    
    <field name="article_id" type="string" multiValued="false" indexed="true" required="true" stored="true"/>
    <field name="article_title" type="string" indexed="true" stored="true"/>
    <field name="article_content" type="text_general" indexed="true" stored="true"/>
    <field name="user_id" type="string" indexed="true" stored="true"/>
    <field name="article_type" type="pint" indexed="true" stored="true"/>
    <field name="edit_date" type="pdate" indexed="true" stored="true"/>
    <field name="read_num" type="pint" indexed="true" stored="true"/>
    <field name="article_summary" type="string" indexed="true" stored="true"/>
  
    <field name="_version_" type="plong" indexed="true" stored="true"/>
    <field name="_root_" type="string" indexed="true" stored="false" docValues="false" />
    <field name="_text_" type="text_general" indexed="true" stored="false" multiValued="true"/>
    
    <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>

    <!-- This can be enabled, in case the client does not know what fields may be searched. It isn't enabled by default
         because it's very expensive to index everything twice. -->
    <!-- <copyField source="*" dest="_text_"/> -->

    <!-- Dynamic field definitions allow using convention over configuration
       for fields via the specification of patterns to match field names.
       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
       RESTRICTION: the glob-like pattern in the name attribute must have a "*" only at the start or the end.  -->
   
    <dynamicField name="*_i"  type="pint"    indexed="true"  stored="true"/>
    <dynamicField name="*_is" type="pints"    indexed="true"  stored="true"/>
    <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
    <dynamicField name="*_ss" type="strings"  indexed="true"  stored="true"/>
    <dynamicField name="*_l"  type="plong"   indexed="true"  stored="true"/>
    <dynamicField name="*_ls" type="plongs"   indexed="true"  stored="true"/>
    <dynamicField name="*_txt" type="text_general" indexed="true" stored="true"/>
    <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
    <dynamicField name="*_bs" type="booleans" indexed="true" stored="true"/>
    <dynamicField name="*_f"  type="pfloat"  indexed="true"  stored="true"/>
    <dynamicField name="*_fs" type="pfloats"  indexed="true"  stored="true"/>
    <dynamicField name="*_d"  type="pdouble" indexed="true"  stored="true"/>
    <dynamicField name="*_ds" type="pdoubles" indexed="true"  stored="true"/>

    <!-- Type used for data-driven schema, to add a string copy for each text field -->
    <dynamicField name="*_str" type="strings" stored="false" docValues="true" indexed="false" />

    <dynamicField name="*_dt"  type="pdate"    indexed="true"  stored="true"/>
    <dynamicField name="*_dts" type="pdate"    indexed="true"  stored="true" multiValued="true"/>
    <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
    <dynamicField name="*_srpt"  type="location_rpt" indexed="true" stored="true"/>
    
    <!-- payloaded dynamic fields -->
    <dynamicField name="*_dpf" type="delimited_payloads_float" indexed="true"  stored="true"/>
    <dynamicField name="*_dpi" type="delimited_payloads_int" indexed="true"  stored="true"/>
    <dynamicField name="*_dps" type="delimited_payloads_string" indexed="true"  stored="true"/>

    <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>

    <!-- Field to use to determine and enforce document uniqueness.
      Unless this field is marked with required="false", it will be a required field
    -->

    <!-- copyField commands copy one field to another at the time a document
       is added to the index.  It's used either to index the same field differently,
       or to add multiple fields to the same field for easier/faster searching.

    <copyField source="sourceFieldName" dest="destinationFieldName"/>
    -->

    <!-- field type definitions. The "name" attribute is
       just a label to be used by field definitions.  The "class"
       attribute and any other attributes determine the real
       behavior of the fieldType.
         Class names starting with "solr" refer to java classes in a
       standard package such as org.apache.solr.analysis
    -->

    <!-- sortMissingLast and sortMissingFirst attributes are optional attributes are
         currently supported on types that are sorted internally as strings
         and on numeric types.
       This includes "string", "boolean", "pint", "pfloat", "plong", "pdate", "pdouble".
       - If sortMissingLast="true", then a sort on this field will cause documents
         without the field to come after documents with the field,
         regardless of the requested sort order (asc or desc).
       - If sortMissingFirst="true", then a sort on this field will cause documents
         without the field to come before documents with the field,
         regardless of the requested sort order.
       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
         then default lucene sorting will be used which places docs without the
         field first in an ascending sort and last in a descending sort.
    -->

    <!-- The StrField type is not analyzed, but indexed/stored verbatim. -->
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" docValues="true" />
    <fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true" docValues="true" />

    <!-- boolean type: "true" or "false" -->
    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
    <fieldType name="booleans" class="solr.BoolField" sortMissingLast="true" multiValued="true"/>

    <!--
      Numeric field types that index values using KD-trees.
      Point fields don't support FieldCache, so they must have docValues="true" if needed for sorting, faceting, functions, etc.
    -->
    <fieldType name="pint" class="solr.IntPointField" docValues="true"/>
    <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
    <fieldType name="plong" class="solr.LongPointField" docValues="true"/>
    <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>
    
    <fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true"/>
    <fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true"/>
    <fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true"/>
    <fieldType name="pdoubles" class="solr.DoublePointField" docValues="true" multiValued="true"/>

    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
         is a more restricted form of the canonical representation of dateTime
         http://www.w3.org/TR/xmlschema-2/#dateTime    
         The trailing "Z" designates UTC time and is mandatory.
         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
         All other components are mandatory.

         Expressions can also be used to denote calculations that should be
         performed relative to "NOW" to determine the value, ie...

               NOW/HOUR
                  ... Round to the start of the current hour
               NOW-1DAY
                  ... Exactly 1 day prior to now
               NOW/DAY+6MONTHS+3DAYS
                  ... 6 months and 3 days in the future from the start of
                      the current day
                      
      -->
    <!-- KD-tree versions of date fields -->
    <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
    <fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true"/>
    
    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
    <fieldType name="binary" class="solr.BinaryField"/>

    <!-- solr.TextField allows the specification of custom text analyzers
         specified as a tokenizer and a list of token filters. Different
         analyzers may be specified for indexing and querying.

         The optional positionIncrementGap puts space between multiple fields of
         this type on the same document, with the purpose of preventing false phrase
         matching across fields.

         For more info on customizing your analyzer chain, please see
         http://lucene.apache.org/solr/guide/understanding-analyzers-tokenizers-and-filters.html#understanding-analyzers-tokenizers-and-filters
     -->

    <!-- One can also specify an existing Analyzer class that has a
         default constructor via the class attribute on the analyzer element.
         Example:
    <fieldType name="text_greek" class="solr.TextField">
      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
    </fieldType>
    -->

    <!-- A text field that only splits on whitespace for exact matching of words -->
    <dynamicField name="*_ws" type="text_ws"  indexed="true"  stored="true"/>
    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      </analyzer>
    </fieldType>

    <!-- A general text field that has reasonable, generic
         cross-language defaults: it tokenizes with StandardTokenizer,
           removes stop words from case-insensitive "stopwords.txt"
           (empty by default), and down cases.  At query time only, it
           also applies synonyms.
      -->
    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        <filter class="solr.FlattenGraphFilterFactory"/>
        -->
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

    <!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,
         removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
         finally applies Porter's stemming.  The query time analyzer also applies synonyms from synonyms.txt. -->
    <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>
    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        <filter class="solr.FlattenGraphFilterFactory"/>
        -->
        <!-- Case insensitive stop word removal.
        -->
        <filter class="solr.StopFilterFactory"
                ignoreCase="true"
                words="lang/stopwords_en.txt"
            />
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.EnglishPossessiveFilterFactory"/>
        <!-- ... the remaining filters of this fieldType and the rest of the stock managed-schema are unchanged and omitted here ... -->
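
Once the import has run, the article documents can be searched from Java with SolrJ. The sketch below is again only illustrative; it assumes the core1 URL and the field names defined above, runs a keyword query against article_content, and prints a few stored fields of each hit.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class ArticleSearch {
    public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8080/solr/core1").build()) {
            // Search the analyzed article_content field for a keyword
            SolrQuery query = new SolrQuery("article_content:solr");
            query.setFields("article_id", "article_title", "article_summary");
            query.setRows(10);

            QueryResponse response = client.query(query);
            System.out.println("Hits: " + response.getResults().getNumFound());
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("article_id") + " : " + doc.getFieldValue("article_title"));
            }
        }
    }
}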
