Extending the schema.xml file using the x-schema.xml file

The schema.xml file defines the default index schema structure. You can extend the default index schema by using a separate XML file. The most common customization of the index schema is defining new index fields.

You cannot directly modify or extend the schema.xml file. Instead, you work with the customizable x-schema.xml file.

Solr field types are defined in the file workspace_dir\search-config-ext\src\index\managed-solr\config\v3\common\x-schema-field-types.xml. Using the field definitions in this file as your template, you will modify the field and field type mappings in the file search-config-ext\src\index\managed-solr\config\v3\indextype\x-schema.xml (where indextype can be one of CatalogEntry, CatalogGroup, Price, or Unstructured).

Procedure

Open the customizable search index schema file, search-config-ext\src\index\managed-solr\config\v3\indextype\x-schema.xml.
  • In the x-schema.xml file, you can make the following customizations:
    1. Define new index fields, which can refer to any base or custom field types.
      For example:
      
      <field name="x_name" type="x_text" indexed="true" stored="true"  multiValued="false"/>
      
      The x_ prefix is used to avoid conflicts with the base schema artifacts. Use the following naming conventions as suffixes:
      fieldName
      Tokenized and not case sensitive; for example, mfName.
      fieldName_cs
      Tokenized and case sensitive; for example, mfName_cs.
      fieldName_ntk
      Non-tokenized and not case sensitive; for example, mfName_ntk.
      fieldName_ntk_cs
      Non-tokenized and case sensitive; for example, catenttype_id_ntk_cs.
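      For example, the following sketch declares two hypothetical custom fields that follow these conventions. The x_text type is the custom tokenized, case-insensitive type that is defined in the next step; the string type is assumed to be an untokenized base field type in x-schema-field-types.xml.

      <!-- Hypothetical custom fields that follow the suffix naming conventions -->
      <field name="x_mfName"        type="x_text" indexed="true" stored="true" multiValued="false"/>
      <field name="x_mfName_ntk_cs" type="string" indexed="true" stored="true" multiValued="false"/>
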
      Note: Base fields cannot be altered or removed.
    2. Define new index field types.
      For example:
      <fieldType name="x_text" class="solr.TextField" positionIncrementGap="100" omitNorms="true">
        <analyzer type="index">
      	<tokenizer class="solr.WhitespaceTokenizerFactory"/>
      	<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
      	<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" 
                            catenateNumbers="1" catenateAll="0" splitOnCaseChange="0" preserveOriginal="1"/>
      	<filter class="solr.LowerCaseFilterFactory"/>	
      	<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
        </analyzer>
        <analyzer type="query">
      	<tokenizer class="solr.WhitespaceTokenizerFactory"/>
      	<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
      	<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" 
                            catenateNumbers="0" catenateAll="0" splitOnCaseChange="0" preserveOriginal="1"/>
      	<filter class="solr.LowerCaseFilterFactory"/>	
      	<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
        </analyzer>
      </fieldType>
      

      Defining new field types requires advanced knowledge of analyzers and tokenizers. Follow the general Solr recommendations when you set field types; for example, use tokenized fields for searching and untokenized fields for sorting or faceting.
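
      For instance, a minimal sketch of that recommendation, assuming an untokenized base field type named string and illustrative field names, pairs a tokenized field for searching with an untokenized companion for sorting or faceting. The companion field can be populated with a copy field statement, as shown in the next step.

      <!-- Tokenized field used for searching -->
      <field name="x_brand"     type="x_text" indexed="true" stored="true"  multiValued="false"/>
      <!-- Untokenized companion field used for sorting or faceting -->
      <field name="x_brand_ntk" type="string" indexed="true" stored="false" multiValued="false"/>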

    3. Define new copy field statements, which can refer to any base or custom fields.
      For example:
      
      <copyField source="x_name" dest="defaultSearch"/> 
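
      As a further sketch, copy field statements can also route a custom field into the defaultSearch field and populate the untokenized companion field from the previous example (the x_brand field names are illustrative assumptions):

      <copyField source="x_brand" dest="defaultSearch"/>
      <copyField source="x_brand" dest="x_brand_ntk"/>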
      
  • Save the file and restart the search server.