"Practice Behind Closed Doors": Master This Java Core Knowledge and Never Panic in a Job-Hopping Interview

Si Teng 2022-05-22 12:44:25


With the epidemic in front of us, are you "practicing behind closed doors" and worrying about the job-hopping season? Today we share a 30-chapter compilation of core Java interview knowledge (covering: JVM, Java collections, Java multithreading and concurrency, Java basics, Spring principles, microservices, Netty and RPC, networking, logging, Zookeeper, Kafka, RabbitMQ, HBase, MongoDB, Cassandra, design patterns, load balancing, databases, consistent hashing, Java algorithms, data structures, encryption algorithms, distributed caching, Hadoop, Spark, Storm, YARN, machine learning, and cloud computing). Digest it well, and you will no longer panic in job-hopping interviews!

02 JVM

  1. Threads
  2. JVM memory areas
  3. JVM runtime memory
  4. Garbage collection and algorithms
  5. The four Java reference types
  6. Generational GC vs. region-based GC collection algorithms
  7. GC garbage collectors
  8. The JVM class loader
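
As a quick illustration of item 5 above, here is a minimal sketch (plain JDK; the variable names are invented for the example) showing how the four reference strengths behave while a strong reference is still held:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        // Strong reference: never collected while reachable.
        Object strong = new Object();

        // Soft reference: collected only when memory is tight.
        SoftReference<Object> soft = new SoftReference<>(strong);

        // Weak reference: collected at the next GC once no strong refs remain.
        WeakReference<Object> weak = new WeakReference<>(strong);

        // Phantom reference: get() always returns null; it is used with a
        // ReferenceQueue to learn when the object is about to be reclaimed.
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom = new PhantomReference<>(strong, queue);

        // While 'strong' is reachable, soft and weak still see the object.
        System.out.println(soft.get() != null);    // true
        System.out.println(weak.get() != null);    // true
        System.out.println(phantom.get() == null); // always true
    }
}
```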

03 Java Collections

  1. Interface inheritance relationships and implementations
  2. List
  3. Set
  4. Map

04 Java Multithreading and Concurrency

  1. Java concurrency knowledge overview
  2. How Java threads are implemented/created
  3. The four kinds of thread pools
  4. The thread life cycle (states)
  5. Four ways to terminate a thread
  6. The difference between sleep and wait
  7. The difference between start and run
  8. Java daemon threads
  9. Java locks
  10. Basic thread methods
  11. Thread context switching
  12. Synchronization locks and deadlock
  13. Thread pool principles
  14. Java blocking queue principles
  15. Usage of CyclicBarrier, CountDownLatch, and Semaphore
  16. The role of the volatile keyword (variable visibility, no reordering)
  17. How to share data between two threads
  18. The role of ThreadLocal (thread-local storage)
  19. The difference between synchronized and ReentrantLock
  20. Concurrency in ConcurrentHashMap
  21. Thread scheduling used in Java
  22. Process scheduling algorithms
  23. What is CAS (compare-and-swap: optimistic locking, lock spinning)
  24. What is AQS (AbstractQueuedSynchronizer)
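
Item 23, CAS, is directly observable through the JDK's `AtomicInteger`; a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(5);

        // CAS succeeds: the current value matches the expected value 5.
        boolean first = counter.compareAndSet(5, 6);

        // CAS fails: the value is now 6, not the expected 5, so nothing changes.
        // A spinning caller would re-read the value and retry here.
        boolean second = counter.compareAndSet(5, 7);

        System.out.println(first);         // true
        System.out.println(second);        // false
        System.out.println(counter.get()); // 6
    }
}
```

The failed second call is exactly the case where optimistic locking "spins": the thread loops, re-reading the current value, until its compare-and-set succeeds.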

05 Java Basics

  1. Java exception classification and handling
  2. Java reflection
  3. Java annotations
  4. Java inner classes
  5. Java generics
  6. Java serialization (creating reusable Java objects)
  7. Java object copying
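
Item 6 can be shown with a small serialization round trip (the `User` class and its fields are invented for illustration):

```java
import java.io.*;

public class SerializeDemo {
    // A serializable value object; 'transient' fields are skipped.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        transient String password; // not written to the stream
        User(String name, String password) { this.name = name; this.password = password; }
    }

    // Serializes the object to an in-memory byte array and reads it back.
    static User roundTrip(User original) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (User) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User copy = roundTrip(new User("alice", "secret"));
        System.out.println(copy.name);             // alice
        System.out.println(copy.password == null); // true: transient field skipped
    }
}
```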

06 Spring Principles

Spring is a comprehensive, one-stop solution for enterprise application development, spanning the presentation layer, business layer, and persistence layer. Even so, Spring can still integrate seamlessly with other frameworks.

  1. Spring characteristics
  2. Spring core components
  3. Spring common modules
  4. Main Spring packages
  5. Commonly used Spring annotations
  6. Spring third-party integrations
  7. Spring IOC principles
  8. Spring AOP principles
  9. Spring MVC principles
  10. Spring Boot principles
  11. JPA principles
  12. MyBatis caching
  13. Tomcat architecture
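
Spring's real IoC container is far more capable, but the core idea behind item 7, letting a container instantiate and cache beans via reflection instead of the caller writing `new`, can be sketched in plain Java (the container and service class names are invented for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class MiniIocDemo {
    // A toy "container": instantiates classes on demand via reflection and
    // caches them, mimicking Spring's default singleton bean scope.
    static class MiniContainer {
        private final Map<Class<?>, Object> singletons = new HashMap<>();

        @SuppressWarnings("unchecked")
        <T> T getBean(Class<T> type) {
            return (T) singletons.computeIfAbsent(type, t -> {
                try {
                    return t.getDeclaredConstructor().newInstance();
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            });
        }
    }

    // A hypothetical bean for illustration.
    public static class GreetingService {
        String greet(String name) { return "Hello, " + name; }
    }

    public static void main(String[] args) {
        MiniContainer container = new MiniContainer();
        GreetingService a = container.getBean(GreetingService.class);
        GreetingService b = container.getBean(GreetingService.class);
        System.out.println(a.greet("Spring")); // Hello, Spring
        System.out.println(a == b);            // true: singleton scope
    }
}
```

The real container adds dependency injection, lifecycle callbacks, scopes, and AOP proxies on top of this idea.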

07 Microservices

  1. Service registration and discovery
  2. API gateway
  3. Configuration center
  4. Event scheduling (Kafka)
  5. Service tracing (starter-sleuth)
  6. Service circuit breaking (Hystrix)
  7. API management

08 Netty and RPC

Netty is a high-performance, asynchronous, event-driven NIO framework built on the APIs provided by Java NIO. It supports TCP, UDP, and file transfer. As an asynchronous NIO framework, all of Netty's IO operations are asynchronous and non-blocking; through its Future-Listener mechanism, users can easily obtain IO operation results either actively or by notification.

  1. Netty principles
  2. Netty high performance
  3. Netty RPC implementation
  4. RMI implementation
  5. Protocol Buffer
  6. Thrift
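
Netty's own `ChannelFuture` listener API is not shown here; a plain-JDK `CompletableFuture` sketch illustrates the same active-vs-notification pattern the paragraph above describes (the string results are invented for the example):

```java
import java.util.concurrent.CompletableFuture;

public class FutureListenerDemo {
    public static void main(String[] args) {
        // Start an "IO operation" asynchronously; the caller is not blocked.
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(() -> "response-bytes");

        // Active style: block and fetch the result when we need it.
        System.out.println(future.join()); // response-bytes

        // Notification style: register a callback invoked on completion,
        // analogous in spirit to Netty's ChannelFuture.addListener(...).
        CompletableFuture<String> handled = future.thenApply(r -> "handled:" + r);
        System.out.println(handled.join()); // handled:response-bytes
    }
}
```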

09 Networking

  1. The 7-layer network architecture
  2. TCP/IP principles
  3. TCP three-way handshake / four-way teardown
  4. HTTP principles
  5. CDN principles
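
The handshake and teardown in item 3 are performed by the OS during `connect` and `close`; a minimal loopback sketch (the message format is invented) makes them observable from Java:

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpEchoDemo {
    // Opens a loopback server on an ephemeral port, connects a client
    // (the OS performs the TCP three-way handshake inside the Socket
    // constructor), sends one line, and returns the server's echo reply.
    static String echoOnce(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("echo:" + in.readLine()); // echo one line back
                } catch (IOException ignored) {
                }
            });
            serverThread.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(message);
                String reply = in.readLine();
                serverThread.join();
                return reply;
            }
            // Closing both sockets triggers the four-way FIN/ACK teardown.
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("ping")); // echo:ping
    }
}
```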

10 Logging

  1. Slf4j
  2. Log4j
  3. Logback
  4. ELK

11 Zookeeper

Zookeeper is a distributed coordination service that can be used for service discovery, distributed locks, distributed leader election, configuration management, and more. Zookeeper provides a tree structure similar to a Linux file system (think of it as a lightweight in-memory file system, suitable only for storing small amounts of information, not large numbers of files or large files), along with a watch-and-notify mechanism for each node.

  1. Zookeeper concepts
  2. Zookeeper roles
  3. How Zookeeper works (atomic broadcast)
  4. The four types of Znode directory nodes

12 Kafka

Kafka is a high-throughput, distributed, publish/subscribe-based messaging system. Originally developed by LinkedIn and written in Scala, it is now an Apache open-source project.

  1. Kafka concepts
  2. Kafka data storage design
  3. Producer design
  4. Consumer design

13 RabbitMQ

RabbitMQ is an open-source implementation of AMQP developed in Erlang. AMQP (Advanced Message Queuing Protocol) is an open-standard application-layer protocol designed for message-oriented middleware. Clients and middleware that implement this protocol can exchange messages without restrictions on vendor, development language, or similar conditions. RabbitMQ originated in financial systems and is used to store and forward messages in distributed systems; it does well in ease of use, scalability, and high availability.

  1. Concepts
  2. RabbitMQ architecture
  3. Exchange types

14 HBase

HBase is a distributed, column-oriented open-source database (strictly speaking, column-family-oriented). HDFS provides HBase with reliable underlying data storage, MapReduce provides it with high-performance computing, and Zookeeper provides it with stable services and a failover mechanism. So we say HBase is a distributed database solution for high-speed storage and retrieval of massive data on large numbers of cheap machines.

  1. Concepts
  2. Column-family storage
  3. HBase core concepts
  4. HBase core architecture
  5. HBase write path
  6. HBase vs. Cassandra

15 MongoDB

MongoDB is written in C++ and is an open-source database system based on distributed file storage. Under high load, adding more nodes keeps server performance guaranteed. MongoDB aims to provide scalable, high-performance data storage solutions for web applications.

MongoDB stores data as documents; each data structure consists of key-value pairs (key => value). MongoDB documents are similar to JSON objects. Field values can contain other documents, arrays, and arrays of documents.

  1. Concepts
  2. Characteristics

16 Cassandra

Apache Cassandra is a highly scalable, high-performance distributed NoSQL database. Cassandra is designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Its distributed architecture places data on different machines with a configurable replication factor, achieving high availability without a single point of failure to worry about.

  1. Concepts
  2. Data model
  3. Consistent hashing and virtual nodes
  4. The Gossip protocol
  5. Data replication
  6. Data write requests and the coordinator
  7. Data read requests and background repair
  8. Data storage (Commitlog, MemTable, SSTable)
  9. Secondary indexes
  10. Data reading and writing

17 Design patterns

  1. Design principles
  2. Factory method pattern
  3. Abstract factory pattern
  4. Singleton pattern
  5. Builder pattern
  6. Prototype pattern
  7. Adapter pattern
  8. Decorator pattern
  9. Proxy pattern
  10. Facade pattern
  11. Bridge pattern
  12. Composite pattern
  13. Flyweight pattern
  14. Strategy pattern
  15. Template method pattern
  16. Observer pattern
  17. Iterator pattern
  18. Chain of responsibility pattern
  19. Command pattern
  20. Memento pattern
  21. State pattern
  22. Visitor pattern
  23. Mediator pattern
  24. Interpreter pattern
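
As one concrete entry from the list, item 4's singleton pattern is a perennial interview favorite; a minimal double-checked-locking sketch:

```java
public class SingletonDemo {
    static class Singleton {
        // 'volatile' prevents instruction reordering, so a half-constructed
        // instance is never published to another thread.
        private static volatile Singleton instance;

        private Singleton() {}

        static Singleton getInstance() {
            if (instance == null) {                 // first check, lock-free
                synchronized (Singleton.class) {
                    if (instance == null) {         // second check, under the lock
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }

    public static void main(String[] args) {
        // Every caller gets the same instance.
        System.out.println(Singleton.getInstance() == Singleton.getInstance()); // true
    }
}
```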

18 Load balancing

Load balancing builds on the existing network structure and provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen network data-processing capacity, and improve network flexibility and availability.

  1. Layer-4 vs. layer-7 load balancing
  2. Load balancing algorithms/strategies
  3. LVS
  4. Keepalived
  5. Nginx reverse-proxy load balancing
  6. HAProxy
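
One of the simplest strategies under item 2 is round robin; a minimal thread-safe sketch (the server addresses are invented):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinDemo {
    // Round robin: each call returns the next server in turn, wrapping around.
    static class RoundRobinBalancer {
        private final List<String> servers;
        private final AtomicInteger next = new AtomicInteger(0);

        RoundRobinBalancer(List<String> servers) { this.servers = servers; }

        String choose() {
            // floorMod keeps the index valid even after the counter overflows.
            int i = Math.floorMod(next.getAndIncrement(), servers.size());
            return servers.get(i);
        }
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb =
                new RoundRobinBalancer(List.of("10.0.0.1", "10.0.0.2", "10.0.0.3"));
        System.out.println(lb.choose()); // 10.0.0.1
        System.out.println(lb.choose()); // 10.0.0.2
        System.out.println(lb.choose()); // 10.0.0.3
        System.out.println(lb.choose()); // 10.0.0.1 (wraps around)
    }
}
```

Weighted round robin, least-connections, and hash-based strategies build on the same selection interface.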

19 Databases

  1. Storage engines
  2. Indexes
  3. The three normal forms of database design
  4. Database transactions
  5. Stored procedures (a predefined set of SQL statements)
  6. Triggers
  7. Database concurrency strategies
  8. Database locks
  9. Distributed locks based on Redis
  10. Partitioned tables
  11. Two-phase commit protocol
  12. Three-phase commit protocol
  13. Flexible transactions
  14. CAP

20 Consistent Hashing

  1. Paxos
  2. Zab
  3. Raft
  4. NWR
  5. Gossip
  6. Consistent hashing
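
Item 6, consistent hashing, can be sketched with a `TreeMap` acting as the hash ring (node names and the virtual-node count are invented for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashDemo {
    // A hash ring: virtual nodes are placed on a sorted map keyed by hash;
    // a key is served by the first node clockwise from the key's own hash.
    static class HashRing {
        private final TreeMap<Long, String> ring = new TreeMap<>();
        private final int virtualNodes;

        HashRing(int virtualNodes) { this.virtualNodes = virtualNodes; }

        void addNode(String node) {
            // Virtual nodes spread each physical node around the ring.
            for (int i = 0; i < virtualNodes; i++) {
                ring.put(hash(node + "#" + i), node);
            }
        }

        String nodeFor(String key) {
            SortedMap<Long, String> tail = ring.tailMap(hash(key));
            // Past the largest hash, wrap around to the start of the ring.
            return tail.isEmpty() ? ring.firstEntry().getValue()
                                  : tail.get(tail.firstKey());
        }

        private static long hash(String s) {
            try {
                byte[] d = MessageDigest.getInstance("MD5")
                        .digest(s.getBytes(StandardCharsets.UTF_8));
                // Use the first 8 bytes of the digest as the ring position.
                long h = 0;
                for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xffL);
                return h;
            } catch (Exception e) {
                throw new IllegalStateException(e);
            }
        }
    }

    public static void main(String[] args) {
        HashRing ring = new HashRing(100);
        ring.addNode("cache-a");
        ring.addNode("cache-b");
        ring.addNode("cache-c");
        // The same key always maps to the same node.
        System.out.println(ring.nodeFor("user:42").equals(ring.nodeFor("user:42"))); // true
    }
}
```

When a node is added or removed, only the keys between it and its ring neighbors move, which is the whole point of the technique.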

21 Java Algorithms

  1. Binary search
  2. Bubble sort
  3. Insertion sort
  4. Quicksort
  5. Shell sort
  6. Merge sort
  7. Bucket sort
  8. Radix sort
  9. Pruning algorithms
  10. Backtracking algorithms
  11. Shortest path algorithms
  12. Maximum subarray algorithm
  13. Longest common subsequence algorithm
  14. Minimum spanning tree algorithms
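
Item 1, binary search, as a minimal sketch:

```java
public class BinarySearchDemo {
    // Classic binary search on a sorted array; returns the index of 'target'
    // or -1 if absent. Runs in O(log n).
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(data, 7)); // 3
        System.out.println(binarySearch(data, 4)); // -1
    }
}
```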

22 Data Structures

  1. Stack
  2. Queue
  3. Linked list
  4. Hash table
  5. Binary search tree
  6. Red-black tree
  7. B-Tree
  8. Bitmap

23 Encryption Algorithms

  1. AES
  2. RSA
  3. CRC
  4. MD5
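
Items 3 and 4 are checksums/hashes rather than encryption proper; a minimal `MessageDigest` sketch (the `"abc"` input is the standard RFC 1321 test vector):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {
    // Hex-encodes the MD5 digest of a string. Note MD5 is broken for
    // security purposes; prefer SHA-256 where collision resistance matters.
    static String md5Hex(String input) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(md5Hex("abc")); // 900150983cd24fb0d6963f7d28e17f72
    }
}
```

Swapping the algorithm name for "SHA-256" yields the stronger hash with the same code shape; AES and RSA live in `javax.crypto` and `java.security` respectively.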

24 Distributed Caching

  1. Cache avalanche
  2. Cache penetration
  3. Cache warming
  4. Cache updating
  5. Cache degradation
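
A common mitigation for item 2, cache penetration, is caching the "not found" result as well, so repeated misses stop at the cache layer; a minimal sketch with an invented stand-in for the database:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class NullCachingDemo {
    // Cache penetration: requests for keys that exist nowhere bypass the
    // cache and hammer the database on every call. Caching Optional.empty()
    // for such keys blocks the repeated misses.
    static final Map<String, Optional<String>> cache = new ConcurrentHashMap<>();
    static int dbHits = 0;

    // Hypothetical stand-in for a database lookup.
    static Optional<String> queryDb(String key) {
        dbHits++;
        return "user:1".equals(key) ? Optional.of("Alice") : Optional.empty();
    }

    static Optional<String> get(String key) {
        // computeIfAbsent stores hits AND misses, so the second lookup of a
        // nonexistent key never reaches queryDb.
        return cache.computeIfAbsent(key, NullCachingDemo::queryDb);
    }

    public static void main(String[] args) {
        get("user:999"); // miss: hits the DB once, caches Optional.empty()
        get("user:999"); // served from cache; the DB is not touched again
        System.out.println(dbHits); // 1
    }
}
```

In a real Redis setup the cached empty value would also carry a short TTL so that keys created later become visible.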

25 Hadoop

Hadoop is a big-data solution that provides a distributed system infrastructure. Its core components are HDFS and MapReduce; Hadoop 2.0 later introduced YARN.

HDFS handles data storage, while MapReduce handles data computation.

1. HDFS consists of a NameNode and DataNodes: the NameNode keeps the basic metadata, while the DataNodes store the data itself.

2. MapReduce consists of a JobTracker and TaskTrackers: the JobTracker distributes tasks, while the TaskTrackers execute the specific tasks.

3. In the master/slave architecture, the NameNode and JobTracker correspond to the master, while the DataNodes and TaskTrackers correspond to the slaves.

  1. Concepts
  2. HDFS
  3. MapReduce
  4. The life cycle of a Hadoop MapReduce job
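
Real MapReduce jobs run on a Hadoop cluster; as a plain-Java sketch of the same map / shuffle / reduce phases described above, here is the classic word count (input lines invented for the example):

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountDemo {
    // Map phase: split each line into words.
    // Shuffle + reduce phase: group identical words and sum their counts.
    static Map<String, Long> count(String[] lines) {
        return Arrays.stream(lines)
                .flatMap(line -> Arrays.stream(line.split(" ")))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count(new String[]{"to be or not", "to be"});
        System.out.println(counts.get("to")); // 2
        System.out.println(counts.get("or")); // 1
    }
}
```

On a cluster, the map and reduce functions run on different machines and the "grouping" becomes a network shuffle, but the data flow is the same.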

26 Spark

Spark provides a comprehensive, unified framework for big-data processing across data sets of different natures (text data, graph data, etc.) and data sources (batch data or real-time streaming data).

  1. Concepts
  2. Core architecture
  3. Core components
  4. SPARK programming model
  5. SPARK computing model
  6. SPARK execution flow
  7. SPARK RDD workflow

27 Storm

Storm is a free, open-source distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data: just as Hadoop does batch processing of big data, Storm processes data in real time.

  1. Concepts
  2. Cluster architecture
  3. Programming model
  4. Topology execution
  5. Storm Streaming Grouping


28 YARN

YARN is a resource-management and task-scheduling framework consisting of three main modules: ResourceManager (RM), NodeManager (NM), and ApplicationMaster (AM). The ResourceManager is responsible for monitoring, allocating, and managing all resources; the ApplicationMaster is responsible for scheduling and coordinating each specific application; the NodeManager is responsible for maintaining each node. For all applications, the RM has absolute control over resources and the right to allocate them, while each AM negotiates resources with the RM and communicates with NodeManagers to execute and monitor tasks.

  1. Concept
  2. ResourceManager
  3. NodeManager
  4. ApplicationMaster
  5. YARN Operation process

29 Machine learning

  1. Decision trees
  2. Random forest algorithm
  3. Logistic regression
  4. SVM
  5. Naive Bayes
  6. K-nearest neighbors algorithm
  7. K-means algorithm
  8. Adaboost algorithm
  9. Neural networks
  10. Markov models

30 Cloud computing

  1. SaaS
  2. PaaS
  3. IaaS
  4. Docker
  5. OpenStack

Last but not least: if you want to stop panicking in job-hopping interviews, then stay at home, "practice behind closed doors", and fill in the gaps!

copyright: author [Si Teng]. Please include the original link when reprinting, thank you. https://en.javamana.com/2022/142/202205211836107086.html