
Integrating Hazelcast with Spring Boot as a Caching Middleware

Introduction to Hazelcast

Hazelcast (www.hazelcast.com) is an in-memory data grid that provides Java programmers with mission-critical transactions and terascale in-memory applications.

A Hazelcast cluster is "masterless", meaning it is not a client-server system. There is a cluster leader, by default the oldest member, which manages how data is distributed across the cluster; if that node goes down, the next-oldest member takes over.

The data structures you work with, such as Maps, Lists, and Queues, are all held in memory. If one node in the cluster dies, no data is lost; but if several nodes go down at the same time, you are in trouble.


Hazelcast is not meant to be used merely as a data cache: it is designed as an in-memory data grid, with a higher-level positioning and a richer feature set.

Advantages:

More data structures and richer functionality.

Easier clustering, with a friendly management UI.

Easier Spring integration.


It comes in an open-source edition and an enterprise edition; the enterprise edition provides technical support.

Features:

A complete IMDG (in-memory data grid) feature set.

Implements the Java caching specification (JCache).

Licensed under Apache 2 (open source).

Just one small jar (very few dependencies, lightweight).

Supports both embedded and distributed (client-server) deployments.

Clients for multiple languages.


Capabilities:

In-memory data grid, caching, microservices, session clustering, messaging, in-memory NoSQL, and application scaling.

Since Hazelcast targets a broad audience and is rich in features, I recommend reading the official documentation from start to finish; here I only take notes on the Java caching side:

1. Data partitioning:

By default, the data is divided into 271 partitions spread across the nodes' memory.

With a single node, all partitions are used for primary storage; if the node goes down, everything is lost, because there are no backups.

With two nodes, each node uses half of its partitions to store primary data and the other half to back up the other node's data, so if one node goes down the other still holds a backup; the data is safer than with a single node.


As nodes are added, the total memory capacity grows, so more data can be stored.
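As a hedged illustration of partitioning, Hazelcast's PartitionService (3.x API) lets you inspect which of the 271 partitions a given key lands on and which member owns it; the key "some-key" is illustrative:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Partition;

public class PartitionDemo {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // Look up the partition (0..270 by default) that the key hashes to, and its owning member.
        Partition partition = hz.getPartitionService().getPartition("some-key");
        System.out.println("partition id: " + partition.getPartitionId()
                + ", owner: " + partition.getOwner());
        hz.shutdown();
    }
}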

2. Distributed data structures:

In Hazelcast all data structures are distributed, so multiple nodes can be used as if they were one. The most commonly used is the Map; others include Queue, MultiMap, Set, and List. They work just like their Java counterparts, because they implement the corresponding Java generic interfaces.

There is also a Topic supporting the publish/subscribe pattern, distributed locks, support for multi-threaded programming environments, and distributed events; see the documentation for details.
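A minimal sketch of using these structures from an embedded member (the names demo-map and demo-queue are illustrative):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;
import java.util.Queue;

public class DistributedStructuresDemo {
    public static void main(String[] args) {
        // Starting an instance makes this JVM a cluster member.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Distributed map: entries are partitioned across the cluster.
        Map<String, String> map = hz.getMap("demo-map");
        map.put("greeting", "hello");

        // Distributed queue: offers and polls are visible cluster-wide.
        Queue<String> queue = hz.getQueue("demo-queue");
        queue.offer("task-1");

        System.out.println(map.get("greeting") + " / " + queue.poll());
        hz.shutdown();
    }
}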

Installation

This example uses the embedded deployment:


1. Add the dependencies:

Here they are declared via Gradle, as an entry in a shared dependency map:

hazelcast:[
        'com.hazelcast:hazelcast:3.8.6',
        'com.hazelcast:hazelcast-spring:3.8.6',
]
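A hedged equivalent declared directly in a build.gradle dependencies block (same version 3.8.6; the compile configuration is assumed):

dependencies {
    // Hazelcast core plus the Spring integration module
    compile 'com.hazelcast:hazelcast:3.8.6'
    compile 'com.hazelcast:hazelcast-spring:3.8.6'
}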


2. Download the Management Center tool from the official site: https://hazelcast.org/download/

3. Once the dependencies are pulled in, find the sample .xml file in the hazelcast jar's directory, copy it into the project's resources directory, and rename it hazelcast.xml.


An overview of that xml file's full contents:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<!--
    The default Hazelcast configuration. This is used when no hazelcast.xml is present.
    Please see the schema for how to configure Hazelcast at https://hazelcast.com/schema/config/hazelcast-config-3.8.xsd
    or the documentation at https://hazelcast.org/documentation/
-->
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.8.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <management-center enabled="true">http://localhost:8080/mancenter</management-center>
    <network>
        <port auto-increment="true" port-count="100">5701</port>
        <outbound-ports>
            <!--
            Allowed port range when connecting to other nodes.
            0 or * means use system provided port.
            -->
            <ports>0</ports>
        </outbound-ports>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false">
                <interface>127.0.0.1</interface>
                <member-list>
                    <member>127.0.0.1</member>
                </member-list>
            </tcp-ip>
            <aws enabled="false">
                <access-key>my-access-key</access-key>
                <secret-key>my-secret-key</secret-key>
                <!--optional, default is us-east-1 -->
                <region>us-west-1</region>
                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
                <host-header>ec2.amazonaws.com</host-header>
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
                <security-group-name>hazelcast-sg</security-group-name>
                <tag-key>type</tag-key>
                <tag-value>hz-nodes</tag-value>
            </aws>
            <discovery-strategies>
            </discovery-strategies>
        </join>
        <interfaces enabled="false">
            <interface>10.10.1.*</interface>
        </interfaces>
        <ssl enabled="false"/>
        <socket-interceptor enabled="false"/>
        <symmetric-encryption enabled="false">
            <!--
               encryption algorithm such as
               DES/ECB/PKCS5Padding,
               PBEWithMD5AndDES,
               AES/CBC/PKCS5Padding,
               Blowfish,
               DESede
            -->
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
    </network>
    <partition-group enabled="false"/>
    <executor-service name="default">
        <pool-size>16</pool-size>
        <!--Queue capacity. 0 means Integer.MAX_VALUE.-->
        <queue-capacity>0</queue-capacity>
    </executor-service>
    <queue name="default">
        <!--
            Maximum size of the queue. When a JVM's local queue size reaches the maximum,
            all put/offer operations will get blocked until the queue size
            of the JVM goes down below the maximum.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size>0</max-size>
        <!--
            Number of backups. If 1 is set as the backup-count for example,
            then all entries of the map will be copied to another JVM for
            fail-safety. 0 means no backup.
        -->
        <backup-count>1</backup-count>

        <!--
            Number of async backups. 0 means no backup.
        -->
        <async-backup-count>0</async-backup-count>

        <empty-queue-ttl>-1</empty-queue-ttl>
    </queue>
    <map name="default">
        <!--
           Data type that will be used for storing recordMap.
           Possible values:
           BINARY (default): keys and values will be stored as binary data
           OBJECT : values will be stored in their object forms
           NATIVE : values will be stored in non-heap region of JVM
        -->
        <in-memory-format>BINARY</in-memory-format>

        <!--
            Number of backups. If 1 is set as the backup-count for example,
            then all entries of the map will be copied to another JVM for
            fail-safety. 0 means no backup.
        -->
        <backup-count>1</backup-count>
        <!--
            Number of async backups. 0 means no backup.
        -->
        <async-backup-count>0</async-backup-count>
        <!--
         Maximum number of seconds for each entry to stay in the map. Entries that are
         older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
         will get automatically evicted from the map.
         Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
      -->
        <time-to-live-seconds>0</time-to-live-seconds>
        <!--
         Maximum number of seconds for each entry to stay idle in the map. Entries that are
         idle(not touched) for more than <max-idle-seconds> will get
         automatically evicted from the map. Entry is touched if get, put or containsKey is called.
         Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
      -->
        <max-idle-seconds>0</max-idle-seconds>
        <!--
            Valid values are:
            NONE (no eviction),
            LRU (Least Recently Used),
            LFU (Least Frequently Used).
            NONE is the default.
        -->
        <eviction-policy>NONE</eviction-policy>
        <!--
            Maximum size of the map. When max size is reached,
            map is evicted based on the policy defined.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size policy="PER_NODE">0</max-size>
        <!--
            `eviction-percentage` property is deprecated and will be ignored when it is set.

            As of version 3.7, eviction mechanism changed.
            It uses a probabilistic algorithm based on sampling. Please see documentation for further details
        -->
        <eviction-percentage>25</eviction-percentage>
        <!--
            `min-eviction-check-millis` property is deprecated  and will be ignored when it is set.

            As of version 3.7, eviction mechanism changed.
            It uses a probabilistic algorithm based on sampling. Please see documentation for further details
        -->
        <min-eviction-check-millis>100</min-eviction-check-millis>
        <!--
            While recovering from split-brain (network partitioning),
            map entries in the small cluster will merge into the bigger cluster
            based on the policy set here. When an entry merge into the
            cluster, there might an existing entry with the same key already.
            Values of these entries might be different for that same key.
            Which value should be set for the key? Conflict is resolved by
            the policy set here. Default policy is PutIfAbsentMapMergePolicy

            There are built-in merge policies such as
            com.hazelcast.map.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
            com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
            com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
            com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
        -->
        <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>

        <!--
           Control caching of de-serialized values. Caching makes query evaluation faster, but it cost memory.
           Possible Values:
                        NEVER: Never cache deserialized object
                        INDEX-ONLY: Caches values only when they are inserted into an index.
                        ALWAYS: Always cache deserialized values.
        -->
        <cache-deserialized-values>INDEX-ONLY</cache-deserialized-values>

    </map>

    <multimap name="default">
        <backup-count>1</backup-count>
        <value-collection-type>SET</value-collection-type>
    </multimap>

    <list name="default">
        <backup-count>1</backup-count>
    </list>

    <set name="default">
        <backup-count>1</backup-count>
    </set>

    <jobtracker name="default">
        <max-thread-size>0</max-thread-size>
        <!-- Queue size 0 means number of partitions * 2 -->
        <queue-size>0</queue-size>
        <retry-count>0</retry-count>
        <chunk-size>1000</chunk-size>
        <communicate-stats>true</communicate-stats>
        <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
    </jobtracker>

    <semaphore name="default">
        <initial-permits>0</initial-permits>
        <backup-count>1</backup-count>
        <async-backup-count>0</async-backup-count>
    </semaphore>

    <reliable-topic name="default">
        <read-batch-size>10</read-batch-size>
        <topic-overload-policy>BLOCK</topic-overload-policy>
        <statistics-enabled>true</statistics-enabled>
    </reliable-topic>

    <ringbuffer name="default">
        <capacity>10000</capacity>
        <backup-count>1</backup-count>
        <async-backup-count>0</async-backup-count>
        <time-to-live-seconds>0</time-to-live-seconds>
        <in-memory-format>BINARY</in-memory-format>
    </ringbuffer>

    <serialization>
        <portable-version>0</portable-version>
    </serialization>

    <services enable-defaults="true"/>

    <lite-member enabled="false"/>

</hazelcast>

Settings you may want to modify:

<!-- Group settings: nodes join the same cluster when their group name and password match -->
<group>
    <name>dev</name>
    <password>dev-pass</password>
</group>

<!-- Management Center: disabled (enabled="false") by default; change it to true to turn it on -->
<management-center enabled="true">http://localhost:8080/mancenter</management-center>

<!-- Port configuration: auto-increment enabled, trying up to 100 ports starting from 5701 -->
<port auto-increment="true" port-count="100">5701</port>

<join>
    <!-- Multicast discovery: when enabled, members whose group name and password match find each other via broadcast and link up as a cluster -->
    <multicast enabled="true">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
    </multicast>
    <!-- TCP/IP discovery: specify a local interface and the addresses of the other members; those nodes form a cluster -->
    <tcp-ip enabled="false">
        <interface>127.0.0.1</interface>
        <member-list>
            <member>127.0.0.1</member>
        </member-list>
    </tcp-ip>
    <!-- Rarely used -->
    <aws enabled="false">
        <access-key>my-access-key</access-key>
        <secret-key>my-secret-key</secret-key>
        <!--optional, default is us-east-1 -->
        <region>us-west-1</region>
        <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
        <host-header>ec2.amazonaws.com</host-header>
        <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
        <security-group-name>hazelcast-sg</security-group-name>
        <tag-key>type</tag-key>
        <tag-value>hz-nodes</tag-value>
    </aws>
    <discovery-strategies>
    </discovery-strategies>
</join>

<!-- If the machine has multiple IP addresses, this restricts which local interfaces Hazelcast uses -->
<interfaces enabled="false">
    <interface>10.10.1.*</interface>
</interfaces>

Usage:

These settings can basically be used as-is; the only change you must make is setting the Management Center entry to enabled="true".

You can also adjust individual settings for your situation; here we simply start the application.

After Spring Boot starts successfully, the startup log will contain Hazelcast member-startup messages confirming the instance is up.


Once it is up, you can open the Management Center to check cluster status; this requires the official management tool package from hazelcast.org/download/

The management tool's directory layout and how to launch it:



Launched this way, the Management Center uses the default port and context path; simply double-clicking the startup script also works.

You can also launch a specific Management Center instance in other ways, for example by specifying the port, the context path, and the classpath:
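A hedged example for the 3.x Management Center, which ships as a war file that can be run directly, with the port and context path as arguments:

java -jar mancenter-3.8.6.war 8080 mancenter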


The first time you open the Management Center you have to register an administrator username and password; just pick one and remember it.

Once it is running you will see the Management Center page.


At this point, setting up Hazelcast in the Spring Boot project is complete.

Next, a simple example of using Spring's caching.

It takes three steps:

1. Add the Spring dependencies (already added during setup).

2. Configure.

3. Use the @Cacheable (read cache), @CachePut (update cache), and @CacheEvict (evict cache) annotations.

A simple Spring caching example:

1. First, add the annotation that enables caching to the Spring Boot application class:
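A minimal sketch (the class name SellerApplication is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching  // turn on Spring's annotation-driven caching
public class SellerApplication {
    public static void main(String[] args) {
        SpringApplication.run(SellerApplication.class, args);
    }
}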


2. Specify Spring's cache type in the configuration file; an application .yml file is used here as the example:
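A minimal sketch of the relevant application.yml entry:

spring:
  cache:
    type: hazelcast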



3. Add the @Cacheable annotation to the service method that should be cached; that alone enables caching for the method:
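A hedged sketch (ProductService is an illustrative name; ProductRpc and Product come from the project):

import com.imooc.api.ProductRpc;
import com.imooc.entity.Product;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    @Autowired
    private ProductRpc productRpc;

    // Runs only on a cache miss; the returned product is cached under the id.
    @Cacheable(cacheNames = "imooc_product")
    public Product findOne(String id) {
        return productRpc.findOne(id);
    }
}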



Note: at this point you may run into a serialization exception:


The reason is that the POJO is serialized when it is stored in the cache; the fix is simply to have the POJO class implement the Serializable interface:
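A minimal sketch (the fields are illustrative):

import java.io.Serializable;

public class Product implements Serializable {
    private static final long serialVersionUID = 1L;

    private String id;
    private String name;

    // getters and setters omitted
}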


Start it again and it will succeed.

Send the request several times and watch the console: if only the first request produces a request log entry, the cache is being used and the caching feature works. Once data is cached, it shows up in the Management Center:



Click Map Browser to view the cached data:


Note: Spring uses a MAP as the cache structure by default, so when you query with the id as the parameter, the id becomes the cache key by default.

Next, using @CachePut (update cache) and @CacheEvict (evict cache):

Once caching is used in more places, instead of adding caching code throughout the service layer, the right approach is to create a dedicated class for the caching logic and have the services call it:

The cache class:

package com.imooc.seller.service;

import com.imooc.api.ProductRpc;
import com.imooc.entity.Product;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Component;

@Component
public class ProductCache {
    static final String CACHE_NAME = "imooc_product";

    private static final Logger LOG = LoggerFactory.getLogger(ProductCache.class);

    @Autowired
    private ProductRpc productRpc;

    /**
     * Read from the cache; on a miss, fetch via RPC and cache the result.
     * @param id product id (used as the cache key)
     * @return the product
     */
    @Cacheable(cacheNames = CACHE_NAME)
    public Product readCache(String id){
        LOG.info("RPC query for one product, request: {}", id);
        Product result = productRpc.findOne(id);
        LOG.info("RPC query for one product, result: {}", result);
        return result;
    }

    /**
     * Update the cache entry under the product's id.
     * @param product the product to cache
     * @return the cached product
     */
    @CachePut(cacheNames = CACHE_NAME, key = "#product.id")
    public Product putCache(Product product){
        return product;
    }

    /**
     * Evict the cache entry for the given id.
     * @param id product id
     */
    @CacheEvict(cacheNames = CACHE_NAME)
    public void removeCache(String id){
        // The annotation performs the eviction; no body is needed.
    }

}

Then just inject it into the service and call it:

@Autowired
private ProductCache productCache;

/**
 * Query a single product.
 * @param id product id
 * @return the product, or null if not found
 */
public Product findOne(String id){
    // Read from the cache (populated on a miss)
    Product product = productCache.readCache(id);
    // If the result is null, evict the entry so only non-null results stay cached
    if(product == null){
        productCache.removeCache(id);
    }
    return product;
}

With that, caching for single-product queries is implemented.

Implementing the cache for querying all products:

Design: all the data should be put into the cache when the application initializes.

A cache-annotated method has no effect when called from within its own class, because the self-invocation bypasses the Spring proxy; so the cache-initialization method has to live in the service. Have that service implement a listener interface so that an event fires once the container finishes initializing, and run the cache-loading logic inside that event:

First, have the service implement the ApplicationListener interface with the generic type ContextRefreshedEvent (the event fired when the context finishes loading), and implement its onApplicationEvent method, where the caching work is done:

Implement the interface:
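A hedged sketch of the class declaration (the service name ProductService is illustrative):

import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Service;

@Service
public class ProductService implements ApplicationListener<ContextRefreshedEvent> {
    // onApplicationEvent is overridden below
}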


Override the onApplicationEvent method and do the caching there:

@Autowired
private ProductCache productCache;

/**
 * After the container finishes initializing, cache all products.
 * @param event the context-refreshed event
 */
@Override
public void onApplicationEvent(ContextRefreshedEvent event) {
    List<Product> products = findAll();
    products.forEach(product -> {
        productCache.putCache(product);
    });
}


/**
 * Query all products.
 * @return the full product list
 */
public List<Product> findAll(){
    // Read through the cache
    return productCache.readAllCache();
}

The ProductCache class with the cache logic:

package com.imooc.seller.service;

import com.hazelcast.core.HazelcastInstance;
import com.imooc.api.ProductRpc;
import com.imooc.api.domain.ProductRpcReq;
import com.imooc.entity.Product;
import com.imooc.entity.enums.ProductStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Component;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Component
public class ProductCache {
    static final String CACHE_NAME = "imooc_product";

    private static final Logger LOG = LoggerFactory.getLogger(ProductCache.class);

    @Autowired
    private ProductRpc productRpc;

    @Autowired
    private HazelcastInstance hazelcastInstance;


    public List<Product> readAllCache(){
        // Access the distributed map that backs the Spring cache directly
        Map<String, Product> map = hazelcastInstance.getMap(CACHE_NAME);
        List<Product> products;
        if(map.size() > 0){
            products = new ArrayList<>(map.values());
        } else {
            products = findAll();
        }
        return products;
    }

    /**
     * Query all products via RPC.
     * @return the full product list
     */
    public List<Product> findAll(){
        ProductRpcReq req = new ProductRpcReq();
        List<String> status = new ArrayList<>();
        status.add(ProductStatus.IN_SELL.name());
        req.setStatusList(status);
        LOG.info("RPC query for all products, request: {}", req);
        List<Product> result = productRpc.query(req);
        LOG.info("RPC query for all products, result: {}", result);
        return result;
    }

    /**
     * Read from the cache; on a miss, fetch via RPC and cache the result.
     * @param id product id (used as the cache key)
     * @return the product
     */
    @Cacheable(cacheNames = CACHE_NAME)
    public Product readCache(String id){
        LOG.info("RPC query for one product, request: {}", id);
        Product result = productRpc.findOne(id);
        LOG.info("RPC query for one product, result: {}", result);
        return result;
    }

    /**
     * Update the cache entry under the product's id.
     * @param product the product to cache
     * @return the cached product
     */
    @CachePut(cacheNames = CACHE_NAME, key = "#product.id")
    public Product putCache(Product product){
        return product;
    }

    /**
     * Evict the cache entry for the given id.
     * @param id product id
     */
    @CacheEvict(cacheNames = CACHE_NAME)
    public void removeCache(String id){
        // The annotation performs the eviction; no body is needed.
    }
}

Once this is done, starting the project puts all products into the cache; watch the console logs and you will see the log entries for the query-all call:

2018-09-11 22:15:24.159  INFO 21336 --- [           main] com.imooc.seller.service.ProductCache    : RPC query for all products, request: com.imooc.api.domain.ProductRpcReq@7e15f4d4
2018-09-11 22:15:24.256 DEBUG 21336 --- [           main] c.g.jsonrpc4j.JsonRpcHttpClient          : Request {"id":"744939775","jsonrpc":"2.0","method":"query","params":[{"idList":null,"minRewardRate":null,"maxRewardRate":null,"statusList":["IN_SELL"]}]}
2018-09-11 22:15:24.350 DEBUG 21336 --- [           main] c.g.jsonrpc4j.JsonRpcHttpClient          : JSON-PRC Response: {"jsonrpc":"2.0","id":"744939775","result":[{"id":"001","name":"rpc","status":"IN_SELL","thresholdAmount":1,"stepAmount":0,"lockTerm":0,"rewardRate":3,"memo":null,"createAt":null,"updateAt":null,"createUser":null,"updateUser":null}]}
2018-09-11 22:15:24.391  INFO 21336 --- [           main] com.imooc.seller.service.ProductCache    : RPC query for all products, result: [Product{id='001', name='rpc', status='IN_SELL', thresholdAmount=1, stepAmount=0, lockTerm=0, rewardRate=3, memo='null', createAt=null, updateAt=null, createUser='null', updateUser='null'}]

Open the Hazelcast Management Center; if the cached data for all the products is there, the cache-all feature has been completed successfully.

With that, the Spring Boot integration of Hazelcast as a cache is fully complete.


Reposted from: https://juejin.im/post/5b953721e51d450e580b0c6d
