1. Prepare the data
1.1 Student-related tables
Create the student table, the student-major link table, the major table, the student-industry link table, the industry table, and the basic-info table, then insert one record for the student Xiaobai. Since Navicat is paid software, HeidiSQL is used here to connect to the local MySQL instance and create the tables.
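A minimal sketch of the six tables follows. Only the table names, the id/join columns, and the columns referenced later by the import queries (nick_name, is_delete, field_name, industry_name, sex, phone, email) come from this article; the column types, sizes, and the sample values for Xiaobai are illustrative assumptions:

-- Base tables (types and sizes are assumptions)
CREATE TABLE tj_student (
  id        BIGINT PRIMARY KEY,
  nick_name VARCHAR(64),
  is_delete VARCHAR(1)
);
CREATE TABLE tj_field (                -- major table
  id         BIGINT PRIMARY KEY,
  field_name VARCHAR(64)
);
CREATE TABLE tj_industry (             -- industry table
  id            BIGINT PRIMARY KEY,
  industry_name VARCHAR(64)
);
CREATE TABLE tj_user_field (           -- student <-> major link table
  student_id    BIGINT,
  user_field_id BIGINT
);
CREATE TABLE tj_user_industry (        -- student <-> industry link table
  student_id  BIGINT,
  industry_id BIGINT
);
CREATE TABLE tj_user_info (            -- basic info; shares its id with tj_student
  id    BIGINT PRIMARY KEY,
  sex   VARCHAR(4),
  phone VARCHAR(20),
  email VARCHAR(64)
);

-- One test record for Xiaobai (all values below are made up)
INSERT INTO tj_student (id, nick_name, is_delete) VALUES (1, '小白', '0');
INSERT INTO tj_field VALUES (1, 'Computer Science');   -- hypothetical major
INSERT INTO tj_industry VALUES (1, 'Internet');        -- hypothetical industry
INSERT INTO tj_user_field VALUES (1, 1);
INSERT INTO tj_user_industry VALUES (1, 1);
INSERT INTO tj_user_info VALUES (1, 'F', '13800000000', 'xiaobai@example.com');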
1.2 Query the data
Query out the data that is to be imported into Solr.
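This is the same join that the parent entity in data-config.xml (section 3.1 below) will execute, so it is worth running in HeidiSQL first to confirm the combined result set looks right:

-- One row per student/major/industry combination,
-- flattening the two link tables onto the student record
SELECT ts.*, tf.field_name, ti.industry_name
FROM tj_student ts
LEFT JOIN tj_user_field tuf    ON ts.id = tuf.student_id
LEFT JOIN tj_field tf          ON tuf.user_field_id = tf.id
LEFT JOIN tj_user_industry tui ON ts.id = tui.student_id
LEFT JOIN tj_industry ti       ON tui.industry_id = ti.id;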
2. Add the jar packages
2.1 Add the MySQL JDBC driver
Download the driver jar from the link below and place it in ../solr-7.7.2/server/solr-webapp/webapp/WEB-INF/lib.
http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.34/
2.2 Add the Solr data-import handler jars
Copy the two jars solr-dataimporthandler-7.7.2 and solr-dataimporthandler-extras-7.7.2 from the ../dist directory into ../solr-7.7.2/server/solr-webapp/webapp/WEB-INF/lib.
3. Modify the configuration
3.1 Add the data-config.xml file
Add a data-config.xml file under the core1/conf directory with the following content:
<?xml version="1.0" encoding="UTF-8" ?>
<dataConfig>
    <dataSource type="JdbcDataSource"
                driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://192.168.33.95:3306/solr"
                user="root"
                password="123456" />
    <document name="testDoc">
        <entity name="tj_student"
                query="SELECT ts.*, tf.field_name, ti.industry_name
                       FROM tj_student ts
                       LEFT JOIN tj_user_field tuf ON ts.id=tuf.student_id
                       LEFT JOIN tj_field tf ON tuf.user_field_id=tf.id
                       LEFT JOIN tj_user_industry tui ON ts.id=tui.student_id
                       LEFT JOIN tj_industry ti ON tui.industry_id=ti.id">
            <entity name="user_info"
                    query="SELECT * FROM tj_user_info WHERE id=${tj_student.id}">
            </entity>
        </entity>
    </document>
</dataConfig>
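The nested entity shows DataImportHandler's variable substitution: for every row returned by the parent tj_student query, ${tj_student.id} is replaced with that row's id before the child query runs. For Xiaobai's record (id = 1 in the sketch from section 1.1, an assumed value), the child entity effectively executes:

SELECT * FROM tj_user_info WHERE id = 1;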
3.2 Modify the solrconfig.xml file
Modify the solrconfig.xml file under core1/conf by adding the following request handler:
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
        <str name="config">data-config.xml</str>
    </lst>
</requestHandler>

The rest of solrconfig.xml is the stock file shipped with Solr 7.7.2 (luceneMatchVersion 7.7.2, the default update handler, caches, search components, and so on) and does not need to be changed.
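This registers the /dataimport endpoint. After restarting Solr so that the new jars and configuration are picked up, a full import can be triggered from the Dataimport tab of the Solr admin UI, or directly via the handler URL (assuming Solr's default port 8983 and the core name core1 used above):
http://localhost:8983/solr/core1/dataimport?command=full-import&clean=true&commit=true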
3.3 Modify the managed-schema file
Add the fields that should be exposed. Note that because a uniqueKey is configured, the id field is mandatory.
In the managed-schema file under core1/conf, add the following field declarations:

<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<field name="nick_name" type="string" indexed="true" stored="true" multiValued="false" />
<field name="is_delete" type="string" indexed="true" stored="true" multiValued="false" />
<field name="field_name" type="string" indexed="true" stored="true" multiValued="false" />
<field name="industry_name" type="string" indexed="true" stored="true" multiValued="false" />
<field name="sex" type="string" indexed="true" stored="true" multiValued="false" />
<field name="phone" type="string" indexed="true" stored="true" multiValued="false" />
<field name="email" type="string" indexed="true" stored="true" multiValued="false" />

The stock _default schema already pre-declares the id, _version_, _root_, and _text_ fields, and its <uniqueKey>id</uniqueKey> element must be kept, which is why id is required above. Everything else in the file (field types, dynamic fields, analyzers) stays unchanged.
--> <dynamicField name="*_s_lower" type="lowercase" indexed="true" stored="true"/> <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.KeywordTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory" /> analyzer> fieldType> Example of using PathHierarchyTokenizerFactory at index time, so queries for paths match documents at that path, or in descendent paths --> <dynamicField name="*_descendent_path" type="descendent_path" indexed="true" stored="true"/> <fieldType name="descendent_path" class="solr.TextField"> <analyzer type="index"> <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" /> analyzer> <analyzer type="query"> <tokenizer class="solr.KeywordTokenizerFactory" /> analyzer> fieldType> Example of using PathHierarchyTokenizerFactory at query time, so queries for paths match documents at that path, or in ancestor paths --> <dynamicField name="*_ancestor_path" type="ancestor_path" indexed="true" stored="true"/> <fieldType name="ancestor_path" class="solr.TextField"> <analyzer type="index"> <tokenizer class="solr.KeywordTokenizerFactory" /> analyzer> <analyzer type="query"> <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" /> analyzer> fieldType> This point type indexes the coordinates as separate fields (subFields) If subFieldType is defined, it references a type, and a dynamic field definition is created matching *___. Alternately, if subFieldSuffix is defined, that is used to create the subFields. Example: if subFieldType="double", then the coordinates would be indexed in fields myloc_0___double,myloc_1___double. Example: if subFieldSuffix="_d" then the coordinates would be indexed in fields myloc_0_d,myloc_1_d The subFields are an implementation detail of the fieldType, and end users normally should not need to know about them. --> <dynamicField name="*_point" type="point" indexed="true" stored="true"/> <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/> A specialized field for geospatial search filters and distance sorting. --> <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/> A geospatial field type that supports multiValued and polygon shapes. 
For more information about this and other spatial fields see: http://lucene.apache.org/solr/guide/spatial-search.html --> <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" /> Payloaded field types --> <fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField"> <analyzer> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/> analyzer> fieldType> <fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField"> <analyzer> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="integer"/> analyzer> fieldType> <fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField"> <analyzer> <tokenizer class="solr.WhitespaceTokenizerFactory"/> <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="identity"/> analyzer> fieldType> some examples for different languages (generally ordered by ISO code) --> Arabic --> <dynamicField name="*_txt_ar" type="text_ar" indexed="true" stored="true"/> <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> for any non-arabic --> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ar.txt" /> normalizes ﻯ to ﻱ, etc --> <filter class="solr.ArabicNormalizationFilterFactory"/> <filter class="solr.ArabicStemFilterFactory"/> analyzer> fieldType> Bulgarian --> <dynamicField name="*_txt_bg" type="text_bg" indexed="true" stored="true"/> <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_bg.txt" /> <filter class="solr.BulgarianStemFilterFactory"/> analyzer> fieldType> Catalan --> <dynamicField name="*_txt_ca" type="text_ca" indexed="true" stored="true"/> <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> removes l', etc --> <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ca.txt"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ca.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Catalan"/> analyzer> fieldType> CJK bigram (see text_ja for a Japanese configuration using morphological analysis) --> <dynamicField name="*_txt_cjk" type="text_cjk" indexed="true" stored="true"/> <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> normalize width before bigram, as e.g. 
half-width dakuten combine --> <filter class="solr.CJKWidthFilterFactory"/> for any non-CJK --> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.CJKBigramFilterFactory"/> analyzer> fieldType> Czech --> <dynamicField name="*_txt_cz" type="text_cz" indexed="true" stored="true"/> <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_cz.txt" /> <filter class="solr.CzechStemFilterFactory"/> analyzer> fieldType> Danish --> <dynamicField name="*_txt_da" type="text_da" indexed="true" stored="true"/> <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Danish"/> analyzer> fieldType> German --> <dynamicField name="*_txt_de" type="text_de" indexed="true" stored="true"/> <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" /> <filter class="solr.GermanNormalizationFilterFactory"/> <filter class="solr.GermanLightStemFilterFactory"/> less aggressive:--> more aggressive:--> analyzer> fieldType> Greek --> <dynamicField name="*_txt_el" type="text_el" indexed="true" stored="true"/> <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> greek specific lowercase for sigma --> <filter class="solr.GreekLowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_el.txt" /> <filter class="solr.GreekStemFilterFactory"/> analyzer> fieldType> Spanish --> <dynamicField name="*_txt_es" type="text_es" indexed="true" stored="true"/> <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" /> <filter class="solr.SpanishLightStemFilterFactory"/> more aggressive:--> analyzer> fieldType> Basque --> <dynamicField name="*_txt_eu" type="text_eu" indexed="true" stored="true"/> <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_eu.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Basque"/> analyzer> fieldType> Persian --> <dynamicField name="*_txt_fa" type="text_fa" indexed="true" stored="true"/> <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100"> <analyzer> for ZWNJ --> <charFilter class="solr.PersianCharFilterFactory"/> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.ArabicNormalizationFilterFactory"/> <filter class="solr.PersianNormalizationFilterFactory"/> <filter class="solr.StopFilterFactory" 
ignoreCase="true" words="lang/stopwords_fa.txt" /> analyzer> fieldType> Finnish --> <dynamicField name="*_txt_fi" type="text_fi" indexed="true" stored="true"/> <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Finnish"/> less aggressive:--> analyzer> fieldType> French --> <dynamicField name="*_txt_fr" type="text_fr" indexed="true" stored="true"/> <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> removes l', etc --> <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_fr.txt"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" /> <filter class="solr.FrenchLightStemFilterFactory"/> less aggressive:--> more aggressive:--> analyzer> fieldType> Irish --> <dynamicField name="*_txt_ga" type="text_ga" indexed="true" stored="true"/> <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> removes d', etc --> <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ga.txt"/> removes n-, etc. position increments is intentionally false! --> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/hyphenations_ga.txt"/> <filter class="solr.IrishLowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ga.txt"/> <filter class="solr.SnowballPorterFilterFactory" language="Irish"/> analyzer> fieldType> Galician --> <dynamicField name="*_txt_gl" type="text_gl" indexed="true" stored="true"/> <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_gl.txt" /> <filter class="solr.GalicianStemFilterFactory"/> less aggressive:--> analyzer> fieldType> Hindi --> <dynamicField name="*_txt_hi" type="text_hi" indexed="true" stored="true"/> <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> normalizes unicode representation --> <filter class="solr.IndicNormalizationFilterFactory"/> normalizes variation in spelling --> <filter class="solr.HindiNormalizationFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hi.txt" /> <filter class="solr.HindiStemFilterFactory"/> analyzer> fieldType> Hungarian --> <dynamicField name="*_txt_hu" type="text_hu" indexed="true" stored="true"/> <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Hungarian"/> less aggressive:--> analyzer> fieldType> Armenian --> <dynamicField name="*_txt_hy" type="text_hy" indexed="true" 
stored="true"/> <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hy.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Armenian"/> analyzer> fieldType> Indonesian --> <dynamicField name="*_txt_id" type="text_id" indexed="true" stored="true"/> <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_id.txt" /> for a less aggressive approach (only inflectional suffixes), set stemDerivational to false --> <filter class="solr.IndonesianStemFilterFactory" stemDerivational="true"/> analyzer> fieldType> Italian --> <dynamicField name="*_txt_it" type="text_it" indexed="true" stored="true"/> <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> removes l', etc --> <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_it.txt"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" /> <filter class="solr.ItalianLightStemFilterFactory"/> more aggressive:--> analyzer> fieldType> Japanese using morphological analysis (see text_cjk for a configuration using bigramming) NOTE: If you want to optimize search for precision, use default operator AND in your request handler config (q.op) Use OR if you would like to optimize for recall (default). --> <dynamicField name="*_txt_ja" type="text_ja" indexed="true" stored="true"/> <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false"> <analyzer> Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer) Kuromoji has a search mode (default) that does segmentation useful for search. A heuristic is used to segment compounds into its parts and the compound itself is kept as synonym. Valid values for attribute mode are: normal: regular segmentation search: segmentation useful for search with synonyms compounds (default) extended: same as search mode, but unigrams unknown words (experimental) For some applications it might be good to use search mode for indexing and normal mode for queries to reduce recall and prevent parts of compounds from being matched and highlighted. Useandfor this and mode normal in query. Kuromoji also has a convenient user dictionary feature that allows overriding the statistical model with your own entries for segmentation, part-of-speech tags and readings without a need to specify weights. Notice that user dictionaries have not been subject to extensive testing. User dictionary attributes are: userDictionary: user dictionary filename userDictionaryEncoding: user dictionary encoding (default is UTF-8) See lang/userdict_ja.txt for a sample user dictionary file. Punctuation characters are discarded by default. Use discardPunctuation="false" to keep them. 
--> <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/> --> Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) --> <filter class="solr.JapaneseBaseFormFilterFactory"/> Removes tokens with certain part-of-speech tags --> <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt" /> Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) --> <filter class="solr.CJKWidthFilterFactory"/> Removes common tokens typically not useful for search, but have a negative effect on ranking --> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt" /> Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) --> <filter class="solr.JapaneseKatakanaStemFilterFactory" minimumLength="4"/> Lower-cases romaji characters --> <filter class="solr.LowerCaseFilterFactory"/> analyzer> fieldType> Korean morphological analysis --> <dynamicField name="*_txt_ko" type="text_ko" indexed="true" stored="true"/> <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100"> <analyzer> Nori Korean morphological analyzer/tokenizer (KoreanTokenizer) The Korean (nori) analyzer integrates Lucene nori analysis module into Solr. It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts. This dictionary was built with MeCab, it defines a format for the features adapted for the Korean language. Nori also has a convenient user dictionary feature that allows overriding the statistical model with your own entries for segmentation, part-of-speech tags and readings without a need to specify weights. Notice that user dictionaries have not been subject to extensive testing. The tokenizer supports multiple schema attributes: * userDictionary: User dictionary path. * userDictionaryEncoding: User dictionary encoding. * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'. * outputUnknownUnigrams: If true outputs unigrams for unknown words. --> <tokenizer class="solr.KoreanTokenizerFactory" decompoundMode="discard" outputUnknownUnigrams="false"/> Removes some part of speech stuff like EOMI (Pos.E), you can add a parameter 'tags', listing the tags to remove. By default it removes: E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV This is basically an equivalent to stemming. 
--> <filter class="solr.KoreanPartOfSpeechStopFilterFactory" /> Replaces term text with the Hangul transcription of Hanja characters, if applicable: --> <filter class="solr.KoreanReadingFormFilterFactory" /> <filter class="solr.LowerCaseFilterFactory" /> analyzer> fieldType> Latvian --> <dynamicField name="*_txt_lv" type="text_lv" indexed="true" stored="true"/> <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_lv.txt" /> <filter class="solr.LatvianStemFilterFactory"/> analyzer> fieldType> Dutch --> <dynamicField name="*_txt_nl" type="text_nl" indexed="true" stored="true"/> <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" /> <filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/> <filter class="solr.SnowballPorterFilterFactory" language="Dutch"/> analyzer> fieldType> Norwegian --> <dynamicField name="*_txt_no" type="text_no" indexed="true" stored="true"/> <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Norwegian"/> less aggressive:--> singular/plural:--> analyzer> fieldType> Portuguese --> <dynamicField name="*_txt_pt" type="text_pt" indexed="true" stored="true"/> <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" /> <filter class="solr.PortugueseLightStemFilterFactory"/> less aggressive:--> more aggressive:--> most aggressive:--> analyzer> fieldType> Romanian --> <dynamicField name="*_txt_ro" type="text_ro" indexed="true" stored="true"/> <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ro.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Romanian"/> analyzer> fieldType> Russian --> <dynamicField name="*_txt_ru" type="text_ru" indexed="true" stored="true"/> <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Russian"/> less aggressive:--> analyzer> fieldType> Swedish --> <dynamicField name="*_txt_sv" type="text_sv" indexed="true" stored="true"/> <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter 
class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Swedish"/> less aggressive:--> analyzer> fieldType> Thai --> <dynamicField name="*_txt_th" type="text_th" indexed="true" stored="true"/> <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.ThaiTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_th.txt" /> analyzer> fieldType> Turkish --> <dynamicField name="*_txt_tr" type="text_tr" indexed="true" stored="true"/> <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.TurkishLowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_tr.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Turkish"/> analyzer> fieldType> Similarity is the scoring routine for each document vs. a query. A custom Similarity or SimilarityFactory may be specified here, but the default is fine for most applications. For more info: http://lucene.apache.org/solr/guide/other-schema-elements.html#OtherSchemaElements-Similarity --> param value-->schema>
四、Importing Data
4.1 Using the Solr Admin UI
If you are not sure how to use this screen, a quick search for "solr admin UI" will turn up plenty of walkthroughs.
4.2 Querying the Imported Data
After clicking Execute Query, you can see that Xiaobai's record has been imported. Reference: https://lucene.apache.org/solr/guide/7_7/uploading-structured-data-store-data-with-the-data-import-handler.html
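The query the Admin UI issues can also be sent directly over HTTP; a minimal example against the host and core used elsewhere in this article (adjust to your own setup):
http://192.168.88.49:8983/solr/core1/select?q=*:*&wt=json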
五、Operating on the Data
5.1 Adding or Updating a Document
Added documents are stored in Solr's own index; they are not written back to the database. Adds, updates, and deletes all go through the /update handler. To update a document, copy the JSON returned by a query into the Documents panel and edit it (mind the double quotes: property names such as nick_name must be quoted too, which the screenshot below misses). An update is really a delete followed by a re-add, so it overwrites the whole document; handle it with care. Also note that _version_ is the index's version field: it must never be set by hand, so do not copy it into the update.
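For reference, here is what the same operation looks like in Solr's XML update format, POSTed to /update?commit=true. The field values below are made up for illustration; the schema shown in section six marks all of these fields required, so a real document must supply each one:
<add>
  <doc>
    <field name="id">2</field>
    <field name="nick_name">小王</field>
    <field name="is_delete">0</field>
    <field name="field_name">计算机</field>
    <field name="industry_name">互联网</field>
    <field name="sex">1</field>
    <field name="phone">13800000000</field>
    <field name="email">xiaowang@example.com</field>
    <field name="create_time">2019-08-01 12:00:00</field>
  </doc>
</add>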
5.2 Deleting a Document
Deletes also use the XML format: just specify the id of the document to remove. Note that the delete-by-query form below, with the catch-all query *:*, removes every document in the index, so use it carefully.
<delete><query>*:*</query></delete><commit/>
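To remove a single document by its unique key instead (id 2 here matches the example discussed in the next subsection):
<delete><id>2</id></delete><commit/>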
5.3 Querying
You can see that the document with id 2 has been deleted and a new record for Xiaowang has been added. More query functions are described at: https://lucene.apache.org/solr/guide/7_2/function-queries.html
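As a quick illustration, filters and sorts can be combined on the select handler (hypothetical values, using fields declared in the managed-schema listing below):
http://192.168.88.49:8983/solr/core1/select?q=nick_name:小王&fq=is_delete:0&sort=create_time%20desc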
六、Full and Delta Imports
6.1 Full Import
Just call the full-import endpoint: http://192.168.88.49:8983/solr/core1/dataimport?command=full-import&clean=true&commit=true (clean=true wipes the existing index before importing, and commit=true commits once the import finishes).
6.2 Delta Import
First, add a timestamp column to the table, for example:
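A sketch of that change in MySQL, assuming the column name create_time that the deltaQuery below relies on:
ALTER TABLE tj_student
  ADD COLUMN create_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP;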
Next, modify the data-config.xml file:
<?xml version="1.0" encoding="UTF-8" ?><dataConfig> <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://192.168.33.95:3306/solr" user="root" password="123456" /> <document name="testDoc"> <entity name="tj_student" query ="select ts.*, tf.field_name, ti.industry_name from tj_student ts LEFT JOIN tj_user_field tuf ON ts.id=tuf.student_id LEFT JOIN tj_field tf ON tuf.user_field_id=tf.id LEFT JOIN tj_user_industry tui ON ts.id=tui.student_id LEFT JOIN tj_industry ti ON tui.industry_id=ti.id where is_delete=0" deltaQuery ="select id from tj_student where create_time > '${dataimporter.last_index_time}' and is_delete=0" deletedPkQuery ="select id from tj_student where is_delete=1" deltaImportQuery="select ts.*, tf.field_name, ti.industry_name from tj_student ts LEFT JOIN tj_user_field tuf ON ts.id=tuf.student_id LEFT JOIN tj_field tf ON tuf.user_field_id=tf.id LEFT JOIN tj_user_industry tui ON ts.id=tui.student_id LEFT JOIN tj_industry ti ON tui.industry_id=ti.id where ts.id='${dataimporter.delta.id}' and is_delete=0" > <entity name="user_info" query="select * from tj_user_info WHERE id=${tj_student.id}">entity> entity> document>dataConfig>
Then modify the managed-schema file. The full listing follows; the meaningful additions are the custom <field> declarations near the top.
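One thing to watch: nick_name is declared below with type text_smartcn, which the default schema does not define, so a matching fieldType has to exist in the file. A minimal sketch of such a type, assuming the smartcn analyzer jars from contrib/analysis-extras have been copied into WEB-INF/lib like the other jars:
<fieldType name="text_smartcn" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- HMM-based Chinese tokenizer from the Lucene smartcn module -->
    <tokenizer class="org.apache.lucene.analysis.cn.smart.HMMChineseTokenizerFactory"/>
  </analyzer>
</fieldType>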
<?xml version="1.0" encoding="UTF-8" ?> Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.--> This example schema is the recommended starting point for users. It should be kept correct and concise, usable out-of-the-box. For more information, on how to customize this file, please see http://lucene.apache.org/solr/guide/documents-fields-and-schema-design.html PERFORMANCE NOTE: this schema includes many optional features and should not be used for benchmarking. To improve performance one could - set stored="false" for all fields possible (esp large fields) when you only need to search on the field but don't need to return the original value. - set indexed="false" if you don't need to search on the field, but only return the field as a result of searching on other indexed fields. - remove all unneeded copyField statements - for best index size and searching performance, set "index" to false for all general text fields, use copyField to copy them to the catchall "text" field, and use that for searching.--><schema name="default-config" version="1.6"> attribute "name" is the name of this schema and is only used for display purposes. version="x.y" is Solr's version number for the schema syntax and semantics. It should not normally be changed by applications. 1.0: multiValued attribute did not exist, all fields are multiValued by nature 1.1: multiValued attribute introduced, false by default 1.2: omitTermFreqAndPositions attribute introduced, true by default except for text fields. 1.3: removed optional field compress feature 1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser behavior when a single string produces multiple tokens. Defaults to off for version >= 1.4 1.5: omitNorms defaults to true for primitive field types (int, float, boolean, string...) 1.6: useDocValuesAsStored defaults to true. --> Valid attributes for fields: name: mandatory - the name for the field type: mandatory - the name of a field type from the fieldTypes section indexed: true if this field should be indexed (searchable or sortable) stored: true if this field should be retrievable docValues: true if this field should have doc values. Doc Values is recommended (required, if you are using *Point fields) for faceting, grouping, sorting and function queries. Doc Values will make the index faster to load, more NRT-friendly and more memory-efficient. 
They are currently only supported by StrField, UUIDField, all *PointFields, and depending on the field type, they might require the field to be single-valued, be required or have a default value (check the documentation of the field type you're interested in for more information) multiValued: true if this field may contain multiple values per document omitNorms: (expert) set to true to omit the norms associated with this field (this disables length normalization and index-time boosting for the field, and saves some memory). Only full-text fields or fields that need an index-time boost need norms. Norms are omitted for primitive (non-analyzed) types by default. termVectors: [false] set to true to store the term vector for a given field. When using MoreLikeThis, fields used for similarity should be stored for best performance. termPositions: Store position information with the term vector. This will increase storage costs. termOffsets: Store offset information with the term vector. This will increase storage costs. required: The field is required. It will throw an error if the value does not exist default: a value that should be used if no value is specified when adding a document. --> field names should consist of alphanumeric or underscore characters only and not start with a digit. This is not currently strictly enforced, but other field names will not have first class support from all components and back compatibility is not guaranteed. Names with both leading and trailing underscores (e.g. _version_) are reserved. --> In this _default configset, only four fields are pre-declared: id, _version_, and _text_ and _root_. All other fields will be type guessed and added via the "add-unknown-fields-to-the-schema" update request processor chain declared in solrconfig.xml. Note that many dynamic fields are also defined - you can use them to specify a field's type via field naming conventions - see below. WARNING: The _text_ catch-all field will significantly increase your index size. If you don't need it, consider removing it and the corresponding copyField directive. --> <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="nick_name" type="text_smartcn" indexed="true" stored="true" required="true" multiValued="false" /> <field name="is_delete" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="field_name" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="industry_name" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="sex" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="phone" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="email" type="string" indexed="true" stored="true" required="true" multiValued="false" /> <field name="create_time" type="string" indexed="true" stored="true" required="true" multiValued="false" /> docValues are enabled by default for long type so we don't need to index the version field --> <field name="_version_" type="plong" indexed="false" stored="false"/> <field name="_root_" type="string" indexed="true" stored="false" docValues="false" /> <field name="_text_" type="text_general" indexed="true" stored="false" multiValued="true"/> This can be enabled, in case the client does not know what fields may be searched. 
It isn't enabled by default because it's very expensive to index everything twice. --> --> Dynamic field definitions allow using convention over configuration for fields via the specification of patterns to match field names. EXAMPLE: name="*_i" will match any field ending in _i (like myid_i, z_i) RESTRICTION: the glob-like pattern in the name attribute must have a "*" only at the start or the end. --> <dynamicField name="*_i" type="pint" indexed="true" stored="true"/> <dynamicField name="*_is" type="pints" indexed="true" stored="true"/> <dynamicField name="*_s" type="string" indexed="true" stored="true" /> <dynamicField name="*_ss" type="strings" indexed="true" stored="true"/> <dynamicField name="*_l" type="plong" indexed="true" stored="true"/> <dynamicField name="*_ls" type="plongs" indexed="true" stored="true"/> <dynamicField name="*_t" type="text_general" indexed="true" stored="true" multiValued="false"/> <dynamicField name="*_txt" type="text_general" indexed="true" stored="true"/> <dynamicField name="*_b" type="boolean" indexed="true" stored="true"/> <dynamicField name="*_bs" type="booleans" indexed="true" stored="true"/> <dynamicField name="*_f" type="pfloat" indexed="true" stored="true"/> <dynamicField name="*_fs" type="pfloats" indexed="true" stored="true"/> <dynamicField name="*_d" type="pdouble" indexed="true" stored="true"/> <dynamicField name="*_ds" type="pdoubles" indexed="true" stored="true"/> <dynamicField name="random_*" type="random"/> Type used for data-driven schema, to add a string copy for each text field --> <dynamicField name="*_str" type="strings" stored="false" docValues="true" indexed="false" useDocValuesAsStored="false"/> <dynamicField name="*_dt" type="pdate" indexed="true" stored="true"/> <dynamicField name="*_dts" type="pdate" indexed="true" stored="true" multiValued="true"/> <dynamicField name="*_p" type="location" indexed="true" stored="true"/> <dynamicField name="*_srpt" type="location_rpt" indexed="true" stored="true"/> payloaded dynamic fields --> <dynamicField name="*_dpf" type="delimited_payloads_float" indexed="true" stored="true"/> <dynamicField name="*_dpi" type="delimited_payloads_int" indexed="true" stored="true"/> <dynamicField name="*_dps" type="delimited_payloads_string" indexed="true" stored="true"/> <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/> Field to use to determine and enforce document uniqueness. Unless this field is marked with required="false", it will be a required field --> <uniqueKey>iduniqueKey> copyField commands copy one field to another at the time a document is added to the index. It's used either to index the same field differently, or to add multiple fields to the same field for easier/faster searching.--> field type definitions. The "name" attribute is just a label to be used by field definitions. The "class" attribute and any other attributes determine the real behavior of the fieldType. Class names starting with "solr" refer to java classes in a standard package such as org.apache.solr.analysis --> sortMissingLast and sortMissingFirst attributes are optional attributes are currently supported on types that are sorted internally as strings and on numeric types. This includes "string", "boolean", "pint", "pfloat", "plong", "pdate", "pdouble". - If sortMissingLast="true", then a sort on this field will cause documents without the field to come after documents with the field, regardless of the requested sort order (asc or desc). 
- If sortMissingFirst="true", then a sort on this field will cause documents without the field to come before documents with the field, regardless of the requested sort order. - If sortMissingLast="false" and sortMissingFirst="false" (the default), then default lucene sorting will be used which places docs without the field first in an ascending sort and last in a descending sort. --> The StrField type is not analyzed, but indexed/stored verbatim. --> <fieldType name="string" class="solr.StrField" sortMissingLast="true" docValues="true" /> <fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true" docValues="true" /> boolean type: "true" or "false" --> <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/> <fieldType name="booleans" class="solr.BoolField" sortMissingLast="true" multiValued="true"/> Numeric field types that index values using KD-trees. Point fields don't support FieldCache, so they must have docValues="true" if needed for sorting, faceting, functions, etc. --> <fieldType name="pint" class="solr.IntPointField" docValues="true"/> <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/> <fieldType name="plong" class="solr.LongPointField" docValues="true"/> <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/> <fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true"/> <fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true"/> <fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true"/> <fieldType name="pdoubles" class="solr.DoublePointField" docValues="true" multiValued="true"/> <fieldType name="random" class="solr.RandomSortField" indexed="true"/> The format for this date field is of the form 1995-12-31T23:59:59Z, and is a more restricted form of the canonical representation of dateTime http://www.w3.org/TR/xmlschema-2/#dateTime The trailing "Z" designates UTC time and is mandatory. Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z All other components are mandatory. Expressions can also be used to denote calculations that should be performed relative to "NOW" to determine the value, ie... NOW/HOUR ... Round to the start of the current hour NOW-1DAY ... Exactly 1 day prior to now NOW/DAY+6MONTHS+3DAYS ... 6 months and 3 days in the future from the start of the current day --> KD-tree versions of date fields --> <fieldType name="pdate" class="solr.DatePointField" docValues="true"/> <fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true"/> Binary data type. The data should be sent/retrieved in as Base64 encoded Strings --> <fieldType name="binary" class="solr.BinaryField"/> solr.TextField allows the specification of custom text analyzers specified as a tokenizer and a list of token filters. Different analyzers may be specified for indexing and querying. The optional positionIncrementGap puts space between multiple fields of this type on the same document, with the purpose of preventing false phrase matching across fields. For more info on customizing your analyzer chain, please see http://lucene.apache.org/solr/guide/understanding-analyzers-tokenizers-and-filters.html#understanding-analyzers-tokenizers-and-filters --> One can also specify an existing Analyzer class that has a default constructor via the class attribute on the analyzer element. 
  <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <!-- for ZWNJ -->
      <charFilter class="solr.PersianCharFilterFactory"/>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.ArabicNormalizationFilterFactory"/>
      <filter class="solr.PersianNormalizationFilterFactory"/>
      <filter class="solr.StopFilterFactory"
ignoreCase="true" words="lang/stopwords_fa.txt" /> analyzer> fieldType> Finnish --> <dynamicField name="*_txt_fi" type="text_fi" indexed="true" stored="true"/> <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Finnish"/> less aggressive:--> analyzer> fieldType> French --> <dynamicField name="*_txt_fr" type="text_fr" indexed="true" stored="true"/> <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> removes l', etc --> <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_fr.txt"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" /> <filter class="solr.FrenchLightStemFilterFactory"/> less aggressive:--> more aggressive:--> analyzer> fieldType> Irish --> <dynamicField name="*_txt_ga" type="text_ga" indexed="true" stored="true"/> <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> removes d', etc --> <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ga.txt"/> removes n-, etc. position increments is intentionally false! --> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/hyphenations_ga.txt"/> <filter class="solr.IrishLowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ga.txt"/> <filter class="solr.SnowballPorterFilterFactory" language="Irish"/> analyzer> fieldType> Galician --> <dynamicField name="*_txt_gl" type="text_gl" indexed="true" stored="true"/> <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_gl.txt" /> <filter class="solr.GalicianStemFilterFactory"/> less aggressive:--> analyzer> fieldType> Hindi --> <dynamicField name="*_txt_hi" type="text_hi" indexed="true" stored="true"/> <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> normalizes unicode representation --> <filter class="solr.IndicNormalizationFilterFactory"/> normalizes variation in spelling --> <filter class="solr.HindiNormalizationFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hi.txt" /> <filter class="solr.HindiStemFilterFactory"/> analyzer> fieldType> Hungarian --> <dynamicField name="*_txt_hu" type="text_hu" indexed="true" stored="true"/> <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Hungarian"/> less aggressive:--> analyzer> fieldType> Armenian --> <dynamicField name="*_txt_hy" type="text_hy" indexed="true" 
stored="true"/> <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hy.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Armenian"/> analyzer> fieldType> Indonesian --> <dynamicField name="*_txt_id" type="text_id" indexed="true" stored="true"/> <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_id.txt" /> for a less aggressive approach (only inflectional suffixes), set stemDerivational to false --> <filter class="solr.IndonesianStemFilterFactory" stemDerivational="true"/> analyzer> fieldType> Italian --> <dynamicField name="*_txt_it" type="text_it" indexed="true" stored="true"/> <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> removes l', etc --> <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_it.txt"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" /> <filter class="solr.ItalianLightStemFilterFactory"/> more aggressive:--> analyzer> fieldType> Japanese using morphological analysis (see text_cjk for a configuration using bigramming) NOTE: If you want to optimize search for precision, use default operator AND in your request handler config (q.op) Use OR if you would like to optimize for recall (default). --> <dynamicField name="*_txt_ja" type="text_ja" indexed="true" stored="true"/> <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false"> <analyzer> Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer) Kuromoji has a search mode (default) that does segmentation useful for search. A heuristic is used to segment compounds into its parts and the compound itself is kept as synonym. Valid values for attribute mode are: normal: regular segmentation search: segmentation useful for search with synonyms compounds (default) extended: same as search mode, but unigrams unknown words (experimental) For some applications it might be good to use search mode for indexing and normal mode for queries to reduce recall and prevent parts of compounds from being matched and highlighted. Useandfor this and mode normal in query. Kuromoji also has a convenient user dictionary feature that allows overriding the statistical model with your own entries for segmentation, part-of-speech tags and readings without a need to specify weights. Notice that user dictionaries have not been subject to extensive testing. User dictionary attributes are: userDictionary: user dictionary filename userDictionaryEncoding: user dictionary encoding (default is UTF-8) See lang/userdict_ja.txt for a sample user dictionary file. Punctuation characters are discarded by default. Use discardPunctuation="false" to keep them. 
--> <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/> --> Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) --> <filter class="solr.JapaneseBaseFormFilterFactory"/> Removes tokens with certain part-of-speech tags --> <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt" /> Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) --> <filter class="solr.CJKWidthFilterFactory"/> Removes common tokens typically not useful for search, but have a negative effect on ranking --> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt" /> Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) --> <filter class="solr.JapaneseKatakanaStemFilterFactory" minimumLength="4"/> Lower-cases romaji characters --> <filter class="solr.LowerCaseFilterFactory"/> analyzer> fieldType> Korean morphological analysis --> <dynamicField name="*_txt_ko" type="text_ko" indexed="true" stored="true"/> <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100"> <analyzer> Nori Korean morphological analyzer/tokenizer (KoreanTokenizer) The Korean (nori) analyzer integrates Lucene nori analysis module into Solr. It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean texts. This dictionary was built with MeCab, it defines a format for the features adapted for the Korean language. Nori also has a convenient user dictionary feature that allows overriding the statistical model with your own entries for segmentation, part-of-speech tags and readings without a need to specify weights. Notice that user dictionaries have not been subject to extensive testing. The tokenizer supports multiple schema attributes: * userDictionary: User dictionary path. * userDictionaryEncoding: User dictionary encoding. * decompoundMode: Decompound mode. Either 'none', 'discard', 'mixed'. Default is 'discard'. * outputUnknownUnigrams: If true outputs unigrams for unknown words. --> <tokenizer class="solr.KoreanTokenizerFactory" decompoundMode="discard" outputUnknownUnigrams="false"/> Removes some part of speech stuff like EOMI (Pos.E), you can add a parameter 'tags', listing the tags to remove. By default it removes: E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV This is basically an equivalent to stemming. 
--> <filter class="solr.KoreanPartOfSpeechStopFilterFactory" /> Replaces term text with the Hangul transcription of Hanja characters, if applicable: --> <filter class="solr.KoreanReadingFormFilterFactory" /> <filter class="solr.LowerCaseFilterFactory" /> analyzer> fieldType> Latvian --> <dynamicField name="*_txt_lv" type="text_lv" indexed="true" stored="true"/> <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_lv.txt" /> <filter class="solr.LatvianStemFilterFactory"/> analyzer> fieldType> Dutch --> <dynamicField name="*_txt_nl" type="text_nl" indexed="true" stored="true"/> <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" /> <filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/> <filter class="solr.SnowballPorterFilterFactory" language="Dutch"/> analyzer> fieldType> Norwegian --> <dynamicField name="*_txt_no" type="text_no" indexed="true" stored="true"/> <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Norwegian"/> less aggressive:--> singular/plural:--> analyzer> fieldType> Portuguese --> <dynamicField name="*_txt_pt" type="text_pt" indexed="true" stored="true"/> <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" /> <filter class="solr.PortugueseLightStemFilterFactory"/> less aggressive:--> more aggressive:--> most aggressive:--> analyzer> fieldType> Romanian --> <dynamicField name="*_txt_ro" type="text_ro" indexed="true" stored="true"/> <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ro.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Romanian"/> analyzer> fieldType> Russian --> <dynamicField name="*_txt_ru" type="text_ru" indexed="true" stored="true"/> <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Russian"/> less aggressive:--> analyzer> fieldType> Swedish --> <dynamicField name="*_txt_sv" type="text_sv" indexed="true" stored="true"/> <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter 
class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" /> <filter class="solr.SnowballPorterFilterFactory" language="Swedish"/> less aggressive:--> analyzer> fieldType> Thai --> <dynamicField name="*_txt_th" type="text_th" indexed="true" stored="true"/> <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.ThaiTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_th.txt" /> analyzer> fieldType> Turkish --> <dynamicField name="*_txt_tr" type="text_tr" indexed="true" stored="true"/> <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.TurkishLowerCaseFilterFactory"/> <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_tr.txt" /> <filter class="solr.SnowballPorterFilterFactory" language="Turkish"/> analyzer> fieldType> Similarity is the scoring routine for each document vs. a query. A custom Similarity or SimilarityFactory may be specified here, but the default is fine for most applications. For more info: http://lucene.apache.org/solr/guide/other-schema-elements.html#OtherSchemaElements-Similarity --> param value--> ik分词器 --> <fieldType name="text_ik" class="solr.TextField"> <analyzer type="index"> <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false" conf="ik.conf"/> <filter class="solr.LowerCaseFilterFactory"/> analyzer> <analyzer type="query"> <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="true" conf="ik.conf"/> <filter class="solr.LowerCaseFilterFactory"/> analyzer> fieldType> solr默认的中文分词器 --> <fieldType name="text_smartcn" class="solr.TextField" positionIncrementGap="100"> <analyzer type="index"> <tokenizer class="org.apache.lucene.analysis.cn.smart.HMMChineseTokenizerFactory"/> analyzer> <analyzer type="query"> <tokenizer class="org.apache.lucene.analysis.cn.smart.HMMChineseTokenizerFactory"/> analyzer> fieldType>schema>
Then edit the dataimport.properties file:
#Mon Oct 14 08:38:57 UTC 2019
interval=1
port=8983
server=192.168.75.49
params=/dataimport?command=delta-import&clean=false&commit=true
webapp=solr
reBuildIndexInterval=7200
syncEnabled=1
last_index_time=2019-10-14 08:38:57
reBuildIndexBeginTime=03:10:00
reBuildIndexParams=/dataimport?command=full-import&clean=true&commit=true
syncCores=core1
tj_student.last_index_time=2019-10-14 08:38:57
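Two caveats. First, stock Solr only maintains the last_index_time entries in this file; the other keys (interval, server, port, webapp, params, syncCores, reBuildIndex*) are read by the third-party dataimport-scheduler jar and only take effect if that listener is installed. Second, command=delta-import finds changed rows through deltaQuery/deltaImportQuery attributes on the entity in data-config.xml. A minimal sketch, assuming tj_student has an update_time timestamp column (hypothetical; substitute your real change-tracking column):
<entity name="tj_student"
        query="SELECT ... (the full-import query shown earlier)"
        deltaQuery="SELECT id FROM tj_student WHERE update_time &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM tj_student WHERE id='${dataimporter.delta.id}'">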
Then restart the Solr server.
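(For example, with the scripts that ship with Solr: bin\solr.cmd restart -p 8983 on Windows, or bin/solr restart -p 8983 on Linux; adjust the port if yours differs.)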
Call the delta-import endpoint:
http://192.168.88.49:8983/solr/core1/dataimport?command=delta-import&clean=false&commit=true
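While it runs, the same handler can be polled for progress with the standard DataImportHandler status command:
http://192.168.88.49:8983/solr/core1/dataimport?command=status
A first full build can be triggered the same way, using the parameters already listed in reBuildIndexParams above (command=full-import&clean=true&commit=true).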
If the import succeeds, the response's statusMessages show how many rows were fetched and how many documents were added or updated.
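As a final check, an ordinary select query should now return the student documents, for example (standard Solr query API, same host and core as above):
http://192.168.88.49:8983/solr/core1/select?q=*:*&rows=10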
That's all there is to importing MySQL data into Solr. Happy National Day, everyone!