Operating Elasticsearch with elasticsearch-rest-high-level-client

Posted by 可——叹——落叶飘零


Summary

I wrote this on a slow day, so feel free to use it as-is; there should be next to no bugs, and if there are, pretend I said nothing (doge).
Q: Why not use the Spring Boot wrapper (spring-data-elasticsearch) instead?
A: Spring Boot over-wraps the client. It is fine for plain, simple operations, but it becomes hard to use once operations get complex, and its API changes frequently across Spring Boot versions. The official Elasticsearch client API changes far less often, and in my experience it is more flexible and powerful. A requirement I hit at work produced inscrutable errors with the Spring Boot client and could not be satisfied by it, so I went with the official API.
Elasticsearch version: 7.4
Installation guide: https://blog.csdn.net/UnicornRe/article/details/121747039?spm=1001.2014.3001.5501

Dependencies

The client version is best kept identical to your ES version. If the dependencies below fail to resolve, add the <properties> block below at the same level as the <parent> tag in your pom.xml:

<properties>
    <java.version>1.8</java.version>
    <!-- <spring-cloud.version>2020.0.2</spring-cloud.version> -->
    <!-- pin the Elasticsearch version -->
    <elasticsearch.version>7.4.0</elasticsearch.version>
</properties>

<!-- elasticsearch -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.4.0</version>
</dependency>

YAML configuration

You can adjust the configuration and code to add more ES nodes; separate the addresses in address with commas.

elasticsearch:
  schema: http
  address: 192.168.52.43:9200
  connectTimeout: 5000
  socketTimeout: 5000
  connectionRequestTimeout: 5000
  maxConnectNum: 100
  maxConnectPerRoute: 100
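
For example, a hypothetical three-node cluster would be configured like this (the two extra hosts are made-up addresses):

elasticsearch:
  schema: http
  address: 192.168.52.43:9200,192.168.52.44:9200,192.168.52.45:9200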

Connection configuration

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

@Configuration
public class EsHighLevalConfigure {
    // protocol
    @Value("${elasticsearch.schema:http}")
    private String schema;
    // cluster addresses; separate multiple nodes with ","
    @Value("${elasticsearch.address}")
    private String address;
    // connect timeout
    @Value("${elasticsearch.connectTimeout:5000}")
    private int connectTimeout;
    // socket timeout
    @Value("${elasticsearch.socketTimeout:10000}")
    private int socketTimeout;
    // timeout for obtaining a connection from the pool
    @Value("${elasticsearch.connectionRequestTimeout:5000}")
    private int connectionRequestTimeout;
    // maximum total connections
    @Value("${elasticsearch.maxConnectNum:100}")
    private int maxConnectNum;
    // maximum connections per route
    @Value("${elasticsearch.maxConnectPerRoute:100}")
    private int maxConnectPerRoute;

    @Bean
    public RestHighLevelClient restHighLevelClient() {
        List<HttpHost> hostLists = new ArrayList<>();
        String[] hostList = address.split(",");
        for (String addr : hostList) {
            String host = addr.split(":")[0];
            String port = addr.split(":")[1];
            hostLists.add(new HttpHost(host, Integer.parseInt(port), schema));
        }
        HttpHost[] httpHost = hostLists.toArray(new HttpHost[0]);
        // build the client builder from the host list
        RestClientBuilder builder = RestClient.builder(httpHost);
        // request timeout configuration
        builder.setRequestConfigCallback(requestConfigBuilder -> {
            requestConfigBuilder.setConnectTimeout(connectTimeout);
            requestConfigBuilder.setSocketTimeout(socketTimeout);
            requestConfigBuilder.setConnectionRequestTimeout(connectionRequestTimeout);
            return requestConfigBuilder;
        });
        // connection pool configuration
        builder.setHttpClientConfigCallback(httpClientBuilder -> {
            httpClientBuilder.setMaxConnTotal(maxConnectNum);
            httpClientBuilder.setMaxConnPerRoute(maxConnectPerRoute);
            httpClientBuilder.setKeepAliveStrategy((response, context) -> Duration.ofMinutes(5).toMillis());
            return httpClientBuilder;
        });
        return new RestHighLevelClient(builder);
    }
}
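With the bean registered you can inject the client anywhere. A quick way to verify connectivity (a minimal sketch; esIsUp is a made-up helper name) is the client's ping method:

@Autowired
private RestHighLevelClient restHighLevelClient;

public boolean esIsUp() throws IOException {
    // true if the cluster answers the ping
    return restHighLevelClient.ping(RequestOptions.DEFAULT);
}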

Index structure

Your index structure is certainly not the same as mine, but the code structure won't need major surgery.
A quick tour of this structure: one intellectual-property record contains n attachment documents (annex) and n applicants/inventors (applicant), which is why "type": "nested" is used. If you don't know how "type": "nested" differs from "type": "object", go read up on it (in short: nested sub-documents are indexed as separate hidden documents and must be queried with nestedQuery, while object fields are flattened into the parent); I won't go into it here.
For optimization, installation, data migration and cold backup, see my article (there was too much material, so some parts are unwritten): https://blog.csdn.net/UnicornRe/article/details/121747039?spm=1001.2014.3001.5501

PUT /intellectual
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}

PUT /intellectual/_mapping
{
  "properties": {
    "id":         { "type": "long" },
    "name":       { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
    "type":       { "type": "keyword" },
    "keycode":    { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
    "officeId":   { "type": "keyword" },
    "officeName": { "type": "keyword" },
    "titular":    { "type": "keyword" },
    "applyTime":  { "type": "long" },
    "endTime":    { "type": "long" },
    "status":     { "type": "keyword" },
    "agentName":  { "type": "text", "analyzer": "ik_smart", "search_analyzer": "ik_smart" },
    "annex": {
      "type": "nested",
      "properties": {
        "id":         { "type": "long" },
        "name":       { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
        "content":    { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_max_word" },
        "createTime": { "type": "long" }
      }
    },
    "applicant": {
      "type": "nested",
      "properties": {
        "id":          { "type": "long" },
        "applicantId": { "type": "long" },
        "isOffice":    { "type": "integer" },
        "userName":    { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
        "outUsername": { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" }
      }
    }
  }
}
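After running the two requests (in Kibana Dev Tools or via curl), you can check that the mapping took effect with:

GET /intellectual/_mapping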

CRUD for ordinary, non-nested fields

Ignore the "type": "nested" objects for now and operate only on the plain fields.
I define an entity class IntellectualEntity whose fields match the mapping above (sketched below).
Every operation below has a RestHighLevelClient restHighLevelClient injected.
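
The original entity isn't shown, so here is a minimal sketch of what IntellectualEntity might look like; the field names follow the mapping, while Lombok and the two nested types are assumptions:

// hypothetical sketch: field names mirror the mapping above
@Data // Lombok-generated getters/setters (an assumption)
public class IntellectualEntity {
    private Long id;
    private String name;
    private String type;
    private String keycode;
    private String officeId;
    private String officeName;
    private String titular;
    private Long applyTime; // epoch millis, mapped as long
    private Long endTime;   // epoch millis, mapped as long
    private String status;
    private String agentName;
    private List<AnnexEntity> annex;         // nested attachments
    private List<ApplicantEntity> applicant; // nested applicants/inventors
}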

Insert

public void insertIntel(IntellectualEntity intellectualEntity) throws IOException {
        // "intellectual" is the index name
        IndexRequest indexRequest = new IndexRequest("intellectual")
                .source(JSON.toJSONString(intellectualEntity), XContentType.JSON)
                .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
                .id(intellectualEntity.getId() + "");// set the ES document id manually
        IndexResponse out = restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
        log.info("status: {}", out.status());
}
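This section covers create, update and delete; for completeness, reading a document back by id looks like this (a minimal sketch; getIntel is a made-up method name):

public IntellectualEntity getIntel(Long id) throws IOException {
        GetRequest getRequest = new GetRequest("intellectual", id + "");
        GetResponse response = restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
        if (!response.isExists()) {
            return null;// no document with that id
        }
        // deserialize the _source JSON back into the entity
        return JSON.parseObject(response.getSourceAsString(), IntellectualEntity.class);
}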

Update (by id)

Only the non-null fields of the entity are updated, like the default update in MyBatis-Plus (fastjson skips null fields during serialization, so they never enter the partial-update doc).
Because an ES document id is unique, this method updates at most one document.

public void updateIntel(IntellectualEntity entity) throws IOException {
        // update the document whose id matches the entity id
        UpdateRequest updateRequest = new UpdateRequest("intellectual", entity.getId() + "");
        byte[] json = JSON.toJSONBytes(entity);
        updateRequest.doc(json, XContentType.JSON);
        UpdateResponse response = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
        log.info("status: {}", response.status());
}
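If the document might not exist yet, the official UpdateRequest can also treat the doc as an upsert (a one-line addition to the method above):

// insert the doc as a new document when no document with that id exists
updateRequest.docAsUpsert(true);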

Update (advanced: update by query, using a painless script)

Painless scripts fit many complex business scenarios, for example updating document fields from values carried in a map, as below.

private void updateByQuery(IntellectualEntity entity) throws IOException {
        UpdateByQueryRequest updateByQueryRequest = new UpdateByQueryRequest();
        updateByQueryRequest.indices("intellectual");
        // the query matches on id (inserts set the doc id to the entity id, which keeps the result unique)
        // be careful if your query can match many documents
        updateByQueryRequest.setQuery(new TermQueryBuilder("id", entity.getId()));
        // the map carries the script parameter values
        Map<String, Object> map = new HashMap<>();
        map.put("intelName", entity.getName());
        map.put("intelStatus", entity.getStatus());
        map.put("intelApplyTime", entity.getApplyTime());
        map.put("intelKeyCode", entity.getKeycode());
        map.put("intelEndTime", entity.getEndTime());
        map.put("intelType", entity.getType());
        map.put("intelTitular", entity.getTitular());
        // ctx._source.xxx are the ES fields (named as in the mapping above); params.xxx come from the map
        updateByQueryRequest.setScript(new Script(ScriptType.INLINE,
                "painless",
                "ctx._source.name=params.intelName;" +
                        "ctx._source.status=params.intelStatus;" +
                        "ctx._source.applyTime=params.intelApplyTime;" +
                        "ctx._source.keycode=params.intelKeyCode;" +
                        "ctx._source.endTime=params.intelEndTime;" +
                        "ctx._source.type=params.intelType;" +
                        "ctx._source.titular=params.intelTitular;"
                , map));
        BulkByScrollResponse bulkByScrollResponse = restHighLevelClient.updateByQuery(updateByQueryRequest, RequestOptions.DEFAULT);
        log.info("update status: {}", bulkByScrollResponse.getStatus());
}
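Update-by-query can hit version conflicts when matched documents change mid-request, and by default the whole request then aborts. Two options from the official API worth knowing (set them before executing the request above):

// count version conflicts and keep going instead of aborting
updateByQueryRequest.setConflicts("proceed");
// refresh the affected shards on completion so the changes are searchable immediately
updateByQueryRequest.setRefresh(true);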

Delete

public void deleteIntel(IntellectualEntity entity) throws IOException {
        DeleteRequest deleteRequest = new DeleteRequest("intellectual", entity.getId() + "");
        DeleteResponse deleteResponse = restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
        log.info("status: {}", deleteResponse.status());
}

Delete (by query)

This works just like update-by-query: swap DeleteRequest for DeleteByQueryRequest and attach a query. A sketch follows, though I'm sure you clever readers had it figured out already.
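
A minimal sketch (reusing the term query from the update-by-query example; deleteIntelByQuery is a made-up method name):

public void deleteIntelByQuery(IntellectualEntity entity) throws IOException {
        DeleteByQueryRequest deleteByQueryRequest = new DeleteByQueryRequest("intellectual");
        // deletes every document the query matches; here the id keeps the match unique
        deleteByQueryRequest.setQuery(new TermQueryBuilder("id", entity.getId()));
        BulkByScrollResponse response = restHighLevelClient.deleteByQuery(deleteByQueryRequest, RequestOptions.DEFAULT);
        log.info("deleted: {}", response.getDeleted());
}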

Search with highlighting (plain highlighting, space-separated multi-term search)

This code does not yet cover highlighting inside nested fields.
When building conditions: should = OR, must = AND.
Steps: build the highlight builder -> run the search -> swap the highlighted fragments in for the plain field values -> return the results.
First, the highlight builder:

private static HighlightBuilder highlightBuilder;
    static {
        highlightBuilder = new HighlightBuilder();
        highlightBuilder.numOfFragments(0);// 0 = no fragmenting: return the whole field content highlighted
        highlightBuilder.preTags("<font color='#e75213'>");// custom highlight tags
        highlightBuilder.postTags("</font>");
        highlightBuilder.highlighterType("unified");// highlighter type
        highlightBuilder
                .field("name")// fields to highlight
                .field("keycode")
        ;
        highlightBuilder.requireFieldMatch(false);
    }

The search itself:

public List<Map<String, Object>> queryByContent(String content, Integer pageCurrent, Date startTimeApply, Date endTimeApply, Date startTimeEnd, Date endTimeEnd) throws IOException {
        // split on whitespace: this search supports several space-separated terms, combined with AND
        String[] manyStr = content.split("\\s+");
        // the List<Map> holds the results to return
        List<Map<String, Object>> list = new LinkedList<>();
        // the top-level bool query builder
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        if (manyStr.length > 1) {
            for (int i = 0; i < manyStr.length; i++) {
                BoolQueryBuilder innerBoolQueryBuilder = QueryBuilders.boolQuery();
                // nestedQuery: match inside the nested annex documents
                innerBoolQueryBuilder.should(QueryBuilders.nestedQuery("annex", QueryBuilders.matchQuery("annex.content", manyStr[i]), ScoreMode.Max).boost(2));
                    innerBoolQueryBuilder
