Extending the schema.xml file using the x-schema.xml file

The schema.xml file defines the default index schema structure. The default index schema can be extended by using a separate XML file. The most common customization of the index schema is defining new index fields.

Note: You cannot directly modify or extend the schema.xml file. Instead, you can work with the customizable x-schema.xml file to extend it.

Procedure

Open the customizable search index schema file:
  • WebSphere Commerce Developer: WCDE_installdir\search\solr\home\masterCatalogId\en_US\Catalogentry\conf\x-schema.xml
  • Linux | AIX | Windows: WC_installdir/instances/instance_name/search/solr/home/masterCatalogId/en_US/Catalogentry/conf/x-schema.xml
This directory contains the Master Catalog folder, which contains the configuration files for each language.
  • Update the customizable search index schema file directly. That is, update the x-schema.xml file instead of changing the default schema.xml files. In the x-schema.xml file, you can make the following customizations:
    1. Define new index fields, which can refer to any base or custom field types.
      For example:
      
      <field name="x_name" type="x_text" indexed="true" stored="true" multiValued="false"/>
      
      The x_ prefix is used to avoid conflicts with the base schema artifacts. Then, use one of the following naming conventions as a suffix:
      fieldName
      Tokenized and not case-sensitive; for example, mfName.
      fieldName_cs
      Tokenized and case-sensitive; for example, mfName_cs.
      fieldName_ntk
      Non-tokenized and not case-sensitive; for example, mfName_ntk.
      fieldName_ntk_cs
      Non-tokenized and case-sensitive; for example, catenttype_id_ntk_cs.
      Note: Base fields cannot be altered or removed.
    2. Define new index field types.
      For example:
      <fieldType name="x_text" class="solr.TextField" positionIncrementGap="100" omitNorms="true">
        <analyzer type="index">
          <tokenizer class="solr.WhitespaceTokenizerFactory"/>
          <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
          <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1"
                  catenateNumbers="1" catenateAll="0" splitOnCaseChange="0" preserveOriginal="1"/>
          <filter class="solr.LowerCaseFilterFactory"/>
          <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
        </analyzer>
        <analyzer type="query">
          <tokenizer class="solr.WhitespaceTokenizerFactory"/>
          <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
          <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0"
                  catenateNumbers="0" catenateAll="0" splitOnCaseChange="0" preserveOriginal="1"/>
          <filter class="solr.LowerCaseFilterFactory"/>
          <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
        </analyzer>
      </fieldType>
      

      Defining new field types requires advanced knowledge of analyzers and tokenizers. Follow the general Solr guidelines when you define field types; for example, use tokenized fields for search, and untokenized fields for sorting or faceting.

    3. Define new copy field statements, which can refer to any base or custom fields.
      For example:
      
      <copyField source="x_name" dest="defaultSearch"/> 
      
  • Save the file and restart the search server.
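
Putting the preceding steps together, a complete x-schema.xml extension might look like the following sketch. This is an illustration only: the x_mfName field names are hypothetical examples that follow the x_ prefix and suffix naming conventions described earlier, and the sketch assumes the standard Solr schema.xml layout with types and fields sections.

      <!-- Hypothetical x-schema.xml extension; all x_ names are examples only. -->
      <schema name="x-example" version="1.5">
        <types>
          <!-- Custom field type from step 2: whitespace-tokenized, lowercased text -->
          <fieldType name="x_text" class="solr.TextField" positionIncrementGap="100" omitNorms="true">
            <analyzer>
              <tokenizer class="solr.WhitespaceTokenizerFactory"/>
              <filter class="solr.LowerCaseFilterFactory"/>
            </analyzer>
          </fieldType>
        </types>
        <fields>
          <!-- Tokenized, not case-sensitive (no suffix) -->
          <field name="x_mfName" type="x_text" indexed="true" stored="true" multiValued="false"/>
          <!-- Non-tokenized, case-sensitive (_ntk_cs suffix); "string" is an untokenized base Solr type -->
          <field name="x_mfName_ntk_cs" type="string" indexed="true" stored="true" multiValued="false"/>
        </fields>
        <!-- Copy the custom field into the base defaultSearch field, as in step 3 -->
        <copyField source="x_mfName" dest="defaultSearch"/>
      </schema>

In this sketch, the tokenized x_mfName field supports keyword search, while the untokenized x_mfName_ntk_cs variant would be used for exact-match filtering or sorting, in line with the Solr guidelines noted earlier.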