Const Pointers and Classes

When a member function of a class is called, it receives an implicit this pointer. By default the type of this pointer is ClassName * const this, meaning the this pointer itself cannot be changed, but the object it points to can be modified.

But if a member function does not modify the object at all, the this pointer should instead be const ClassName * const this, i.e. a constant pointer to a constant: neither the pointer itself nor the object it points to may be modified.

Since this is an implicit pointer, where do we declare that the object it points to is const?

Right after the member function's parameter list, we add a const. Such a function is called a const member function.

Note that a const object cannot call its own non-const member functions. The reason is that the this pointer of a non-const member function has type ClassName * const this, which cannot point to a const object.
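A small sketch of the idea (the class and member names are made up for illustration):

#include <string>

class Person {
public:
    // Inside this function the implicit this has type Person * const:
    // the pointer cannot be reseated, but the object may be modified.
    void setName(const std::string& n) { name_ = n; }

    // The trailing const makes this a const member function: this now has type
    // const Person * const, so the function cannot modify the object.
    const std::string& name() const { return name_; }

private:
    std::string name_;
};

int main() {
    const Person p{};                    // a const object
    const std::string& n = p.name();     // OK: const objects may call const member functions
    // p.setName("Bob");                 // error: setName() is non-const and cannot be called on a const object
    (void)n;
    return 0;
}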

Stack vs. Heap Speed

Today I was asked a question: how do the heap and the stack compare in speed?

I had never looked into this comparison carefully, and I hadn't read any blog posts about it either, so I didn't know the standard answer, but I managed to reason it out.

Both the heap and the stack ultimately live in physical DRAM. Because of the page table mechanism, user-level applications never see physical memory directly, and both heap and stack pages end up scattered across DRAM, so physically there should be no difference between them. But since people do compare heap and stack speed, there must be a difference, and if it doesn't come from the physical DRAM it can only come from the cache: the stack must have a higher cache hit rate, and that is what produces the speed difference.

The above is what I deduced during the interview. Afterwards I looked up the relevant material and found that this is indeed the reason.

  • On the stack, temporal allocation locality (allocations made close together in time) implies spatial locality (storage that is close together in space). In turn, when temporal allocation locality implies temporal access locality (objects allocated together are accessed together), the sequential stack storage tends to perform better with respect to CPU caches and operating system paging systems.

  • Memory density on the stack tends to be higher than on the heap because of the reference type overhead (discussed later in this chapter). Higher memory density often leads to better performance, e.g., because more objects fit in the CPU cache.

  • Thread stacks tend to be fairly small – the default maximum stack size on Windows is 1MB, and most threads tend to actually use only a few stack pages. On modern systems, the stacks of all application threads can fit into the CPU cache, making typical stack object access extremely fast. (Entire heaps, on the other hand, rarely fit into CPU caches.)
    《Pro .NET Performance》

In summary:

  1. Stack allocations made close together in time end up close together in space (temporal allocation locality implies spatial locality), and locality is exactly the assumption CPU caches are built around, so this favors the cache
  2. Memory density on the stack is higher than on the heap, which also favors the cache
  3. Thread stacks tend to be small and can fit entirely in the CPU cache
  4. In addition, heap allocation goes through malloc, which has to search for a suitable block of memory and may even trigger a system call, whereas stack memory is allocated automatically, so allocation itself is faster on the stack (a rough comparison sketch follows below)
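As a very rough illustration of point 4 (not a rigorous benchmark; the compiler may optimize parts of it away, and the buffer size and iteration count are arbitrary):

#include <chrono>
#include <cstdio>
#include <memory>

constexpr int kIters = 1000000;

// Touch the buffer so the allocations are not optimized away entirely.
static void touch(volatile char* p) { p[0] = 1; p[4095] = 2; }

int main() {
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    for (int i = 0; i < kIters; ++i) {
        char buf[4096];                                // stack: essentially a stack-pointer adjustment
        touch(buf);
    }
    auto t1 = clock::now();
    for (int i = 0; i < kIters; ++i) {
        auto buf = std::make_unique<char[]>(4096);     // heap: allocator bookkeeping, possibly a system call
        touch(buf.get());
    }
    auto t2 = clock::now();

    using us = std::chrono::microseconds;
    std::printf("stack: %lld us\n", (long long)std::chrono::duration_cast<us>(t1 - t0).count());
    std::printf("heap:  %lld us\n", (long long)std::chrono::duration_cast<us>(t2 - t1).count());
    return 0;
}

On a typical machine the heap loop is noticeably slower even before cache effects on access (rather than allocation) are taken into account.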

SmartConfig Wi-Fi provisioning with ESP32 + Arduino, and the companion app

The basic principle is sending broadcasts over Wi-Fi using UDP.
The driver package Arduino provides is extremely convenient, so I used it directly. It is worth reading the library source: there is quite a lot of inheritance, with one WiFi class inheriting from several different parent classes. Among them, the ESP8266WiFiSTAClass class provides the various methods for STA mode, including these four:

        bool beginWPSConfig(void);
        bool beginSmartConfig();
        bool stopSmartConfig();
        bool smartConfigDone();

WPS requires support from the router; what the other three methods do is obvious from their names.
The demo code is as follows:
 
char ssid[30] = STASSID;
char password[30] = STAPSK;

// Call this function from setup()
void smartConfig()
{
  WiFi.mode(WIFI_STA);                                // switch to STA mode
  Serial.println("\r\nWait for Smartconfig\r\n");
  WiFi.beginSmartConfig();                            // start SmartConfig
  while (1)
  {
    Serial.print("Waiting ...\r\n");
    if (WiFi.smartConfigDone())
    {
      Serial.println("SmartConfig Success!!!");
      Serial.printf("SSID:%s\r\n", WiFi.SSID().c_str());
      Serial.printf("PSW:%s\r\n", WiFi.psk().c_str());
      memset(ssid, 0, sizeof(ssid));
      memset(password, 0, sizeof(password));
      sprintf(ssid, "%s", WiFi.SSID().c_str());
      sprintf(password, "%s", WiFi.psk().c_str());      // store the Wi-Fi SSID and password
      WiFi.setAutoConnect(true);                        // reconnect automatically
      break;
    }
    delay(1000);
  }
}
 
 
Yesterday I spent ages looking for a provisioning tool; it turns out Espressif officially provides a very good one, esp-touch.
The phone first has to be connected to the Wi-Fi network; it then sends the UDP broadcast, and only then can the ESP32 receive the packets.
 
There are all sorts of messy provisioning apps online, and this one is not listed in many phones' app stores, so it is best to download it from Espressif's official website.


CSAPP Virtual Memory

To practice my English, this note is written in English.

What is virtual memory? Why do we need it? Today I want to talk about it and record some notes in this blog post to understand it better.

First and foremost: what is virtual memory?

What is Virtual Memory (Basic Concept of Virtual Memory)

It is generally known that the most basic operation of a modern computer consists only of fetching instructions and executing them. Memory is just a linear space that stores code and data. When we run a program on the machine, some of the program's code and data are moved from disk into memory. If the addresses used by the code were fixed, we would have to load the code and data into a fixed region of memory, which would clearly be terrible.

In a modern operating system we can run many programs at once, and the memory of each program is independent. Every process running on the machine has a virtual address space that appears to be both huge and exclusively its own. This is what virtual memory does in a modern operating system.

So now we know: virtual memory is the tool the operating system uses to help us run multiple programs.

Physical and Virtual Addressing

Physical Addressing

Main memory is organized as an array of M contiguous byte-size cells. Each cell has a unique address known as a physical address. The most natural way for a CPU to access memory would be to use physical addressing.

Virtual Addressing

With virtual addressing, the CPU accesses memory using a virtual address (VA), which is converted to a physical address before being sent to main memory (address translation).

Address translation is collaborative work done by the hardware and the operating system.

The hardware involved is called the memory management unit (MMU). The MMU looks up a table stored in main memory whose contents are managed by the operating system.

Why We Need It

In general, we need virtual memory for three reasons:

  1. It uses main memory efficiently by treating it as a cache for an address space stored on disk. (Caching)

  2. It simplifies memory management by providing each process with a uniform address space. (Management)

  3. It protects the address space of each process from corruption by other processes. (Protection)


VM as a Tool for Caching

Partition

At first, think of main memory as a kind of cache between the disk and the CPU. The CPU could run by fetching instructions and data from disk, but the access time of the disk is very large, so we need DRAM to cache the code and data.

We partition the data into transfer units that move between the disk and main memory.

In virtual memory these transfer units are known as virtual pages (VPs). (The virtual pages are stored on disk.)

When cached in main memory, the transfer units are known as physical pages (PPs), or page frames.

This is similar in concept to SRAM caching, but DRAM caching needs bigger pages as transfer units because the first-access time of the disk is so large: reading the first byte from a disk sector is about 100,000 times slower than reading successive bytes within the sector.

Page Table

When we access a virtual memory address, we want to know whether it is cached. If it is, we need the physical address corresponding to the virtual address so we can access the DRAM cache and get the data. So we need a table (the page table) that records, for each virtual page, whether it is cached and the corresponding physical address.

The page table is essentially an array of page table entries (PTEs), where each entry holds a valid bit and an address field.

When we access an address in the X-th virtual page, we simply look up the X-th entry of the table and use its fields to decide what to do next.

If the valid bit is 1, the machine can take the PA from the entry and access the DRAM cache.

If the valid bit is 0, the machine runs a page fault handler in the kernel, which selects a victim page. If the victim has been modified, the kernel copies it back to disk. In either case, the kernel updates the page table entry for the victim page to reflect that it is no longer cached in DRAM. Next, the kernel copies the VP we are accessing into the DRAM cache and updates the page table. (If there is enough free space in the cache, no victim needs to be selected.)

VM as a Tool for Memory Management

Giving every process a separate virtual address space has a profound impact on memory management. VM simplifies linking and loading, the sharing of code and data, and allocating memory to applications.

  • Simplifying Linking

    A separate virtual address space allows each process to use the same basic format for its memory image, regardless of where the code and data actually reside in physical memory.

  • Simplifying Loading

    The Linux loader just allocates virtual pages for the code and data segments and points the process's page table entries at the appropriate locations in the object file.

    The loader never actually copies any data from disk into memory. The data are paged in automatically and on demand by the virtual memory system the first time each page is referenced.

  • Simplifying Sharing

  • Simplifying Allocation

When a program running in a user process requests additional heap space, the operating system simply allocates some virtual pages for the process; it does not need to decide where those pages will be placed in physical memory.

VM as a Tool for Memory Protection

Because the VM system gives each process a separate address space, it is easy to prevent one process from accessing the private data of another process.

Every memory access made by a program goes through address translation, so we only need to add some permission flags to the page table entries to build a protection system.

Address Translation

Here are the details of address translation.

A virtual address is split into a virtual page number (VPN) and a virtual page offset (VPO). The VPO is identical to the physical page offset (PPO); both consist of p bits. When we access a virtual page, the CPU finds the corresponding page table entry; the starting address of the page table is stored in the PTBR (a register in the CPU).

If the valid bit is 1, we have a page hit:

  1. The processor accesses a virtual address and sends the VA to the MMU.

  2. The MMU requests the correct PTE from main memory / the cache.

  3. The cache/memory returns the PTE.

  4. The MMU constructs the physical address and sends it to the cache/memory.

  5. The cache/memory returns the data.

If the valid bit is 0, we have a page fault. Steps 1-3 are the same as above; then:

  4. The MMU triggers an exception (a page fault exception).

  5. The fault handler identifies a victim page in physical memory and, if that page has been modified, pages it out to disk.

  6. The handler pages in the new page and updates the page table.

  7. The handler returns to the original process, which re-executes the faulting instruction. (A toy sketch of the whole translation path follows below.)
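To tie together the page-table structure and the hit/fault paths above, here is a toy C++ sketch of the lookup the MMU performs (the 4 KB page size is just the usual example value, and the struct layout is purely illustrative, not a real hardware format):

#include <cstdint>
#include <cstdio>
#include <vector>

constexpr unsigned kPageBits = 12;   // 4 KB pages: the low 12 bits are the page offset

struct PTE {
    bool     valid = false;  // is this virtual page currently cached in DRAM?
    uint64_t ppn   = 0;      // physical page number (meaningful only when valid)
};

// Returns true and fills *pa on a page hit; returns false on a page fault.
// (In a real system the MMU raises an exception, the kernel's handler pages the
// data in, and the faulting instruction is restarted.)
bool translate(const std::vector<PTE>& pageTable, uint64_t va, uint64_t* pa) {
    uint64_t vpn = va >> kPageBits;                  // virtual page number: index into the table
    uint64_t vpo = va & ((1ull << kPageBits) - 1);   // virtual page offset == physical page offset
    if (vpn >= pageTable.size() || !pageTable[vpn].valid)
        return false;                                // page fault
    *pa = (pageTable[vpn].ppn << kPageBits) | vpo;   // PA = PPN concatenated with the offset
    return true;
}

int main() {
    std::vector<PTE> table(8);
    table[3].valid = true;                           // VP 3 is cached...
    table[3].ppn   = 42;                             // ...in PP 42

    uint64_t pa;
    if (translate(table, (3u << kPageBits) | 0x1AB, &pa))
        std::printf("hit:   PA = 0x%llx\n", (unsigned long long)pa);
    if (!translate(table, (5u << kPageBits) | 0x10, &pa))
        std::printf("fault: VP 5 is not resident\n");
    return 0;
}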

Caching Page Table Entries (the TLB)

Every time we access memory for data or anything else, we have to access physical memory twice (once to get the PTE and once to get the data). So the idea arose of caching PTEs in a faster place, so that address translation itself becomes faster. This is the origin of the TLB (translation lookaside buffer).

 

Saving Memory (Multi-Level Page Tables)

If we had a 32-bit address space, 4 KB pages, and a 4-byte PTE, then we would need a 4 MB page table resident in memory at all times, even if the application referenced only a small chunk of the virtual address space.

Multi-level page tables solve this problem.

The idea is easy to understand: we only create page tables for the parts of the address space that are actually used, so we save a lot of space, at the cost of a more complex structure. With k levels we must access k PTEs per translation. That seems expensive, but the TLB comes to the rescue here by caching PTEs.
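As a rough back-of-the-envelope check, using the numbers above plus an assumed two-level layout with 1024 entries per table (as on 32-bit x86); the "4 chunks in use" figure is just an arbitrary example of a small program:

#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t vaSpace  = 1ull << 32;  // 32-bit virtual address space
    const uint64_t pageSize = 4 * 1024;    // 4 KB pages
    const uint64_t pteSize  = 4;           // 4-byte PTE

    uint64_t numPages  = vaSpace / pageSize;       // 2^20 virtual pages
    uint64_t flatTable = numPages * pteSize;       // single-level table: 4 MB, always resident

    // Two-level scheme: a 1024-entry top-level table, each entry covering a
    // 1024-page chunk; second-level tables exist only for chunks actually in use.
    uint64_t entriesPerTable = 1024;
    uint64_t topLevel   = entriesPerTable * pteSize;   // 4 KB, always resident
    uint64_t chunksUsed = 4;                           // e.g. a small program touching 4 chunks
    uint64_t twoLevel   = topLevel + chunksUsed * entriesPerTable * pteSize;

    std::printf("flat page table:    %llu KB\n", (unsigned long long)(flatTable / 1024)); // 4096 KB
    std::printf("two-level (sparse): %llu KB\n", (unsigned long long)(twoLevel / 1024));  // 20 KB
    return 0;
}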

Case Study: Intel Core i7 / Linux Memory System

CSAPP Chapter 12: Concurrent Programming (Reading Notes)

CSAPP Concurrent Programming

Overview

There are roughly three different ways to implement concurrency:

  • Processes

  • I/O multiplexing

  • Threads

Each approach has its pros and cons. A process has its own independent address space, but processes must communicate through IPC, which is rather inconvenient. I/O multiplexing uses only a single process; inside that one process there is a state machine, and depending on which I/O becomes ready, different logic is executed, which achieves concurrency. The control flows inside an I/O-multiplexed program are not scheduled by the kernel; the program schedules itself according to its state. Threads combine the characteristics of I/O multiplexing and processes: one process can contain multiple threads, those threads share the same address space (which makes communication between them convenient), and threads are scheduled by the kernel just like processes.

About the discussion that follows

Why do we want concurrency at all? Consider this example: an HTTP server. When a client connects to the HTTP server, the client sends some data and the server returns the corresponding response.

Without concurrency, the server can only attend to one customer at a time; when a second customer arrives, the server has to set that customer aside for the moment.

That is clearly unacceptable, so we want "concurrency": within a period of time, the server takes care of multiple customers at once. Note that we say within a period of time, because concurrency does not require serving several customers at the same instant, only that they are all attended to within some window of time. If we were talking about the same instant, the right word would be "parallel", which would correspond to having several waiters serving at once.

To implement concurrency, there are roughly the three approaches mentioned above. Let's look at each in detail.

Concurrency with processes

Everyone is quite familiar with processes: on Linux we create one with fork, and the kernel then schedules the different processes. This is done automatically by the kernel without any manual intervention; each process is like a little virtual person, except that this person runs for a while, then stops for a while. As long as the intervals are short enough, you cannot tell it is "virtual".

Each process has its own independent virtual address space. This is both an advantage and a disadvantage:

  • Advantage

    • The independent address spaces eliminate all sorts of strange conflicts between different processes.

  • Disadvantage

    • Inter-process communication has to go through IPC, which is usually relatively slow.

In addition, scheduling processes is fairly expensive.

The example code is as follows:
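The original listing is not reproduced here, so below is a minimal sketch of the same idea, assuming a simple echo service on an arbitrary port (the echo logic and port number are placeholders, not the book's csapp.c code):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <csignal>

// Reap finished children so they don't linger as zombies.
static void sigchld_handler(int) {
    while (waitpid(-1, nullptr, WNOHANG) > 0) {}
}

// Trivial per-connection work: echo bytes back until the client closes.
static void echo(int connfd) {
    char buf[4096];
    ssize_t n;
    while ((n = read(connfd, buf, sizeof(buf))) > 0)
        write(connfd, buf, static_cast<size_t>(n));
}

int main() {
    signal(SIGCHLD, sigchld_handler);

    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    int opt = 1;
    setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listenfd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listenfd, 128);

    while (true) {
        int connfd = accept(listenfd, nullptr, nullptr);
        if (connfd < 0) continue;
        if (fork() == 0) {        // child handles this client
            close(listenfd);      // the child never accepts new connections
            echo(connfd);
            close(connfd);
            _exit(0);
        }
        // Parent: close its copy of connfd. Parent and child refer to the same
        // open connection, so it is only really closed once every process
        // holding the descriptor has closed it.
        close(connfd);
    }
}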

Note how the parent process closes the connfd file descriptor (line 33 in the original listing). This is because the child inherits copies of the parent's descriptors, and both copies refer to the same open connection, so both the parent and the child have to close their copy; otherwise the descriptor leaks and the connection is never terminated. This is worth studying in detail together with system-level I/O.

Concurrency with I/O multiplexing

Let's think about the drawbacks of processes.

First, processes are resource-hungry: both the scheduling between processes and the memory each process occupies are a significant cost to the system. Once there are too many customers, if every customer needs its own "shadow clone" of the waiter, the waiter will probably collapse first; he cannot split off that many clones. Is there any way to improve on this?

Think about what each clone of the waiter is actually doing. Once a customer arrives at the restaurant, the customer does not need to talk to the waiter every single moment. In other words, most of the time our "clones" are just waiting on the customer; this is blocking I/O.

Given this observation, can a single waiter satisfy the needs of multiple customers? Of course it can.

When a customer enters the restaurant, the waiter seats them. When a customer has decided what to order, the customer raises a hand; the waiter sees the customer, walks over and handles the request, then looks around for the next raised hand, and if there is one, handles that too.

This is really an event-driven model: a customer raising a hand, or a new customer arriving, is an "event". (A small select()-based sketch of this idea follows below.)
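As a rough sketch of the event-driven idea (not the book's event-loop server; the port and buffer size are arbitrary), a single process can watch the listening socket and all connected sockets with select() and only act on whichever descriptor "raises its hand":

#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

int main() {
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listenfd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listenfd, 128);

    std::vector<int> clients;                  // currently "seated customers"
    while (true) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listenfd, &readfds);
        int maxfd = listenfd;
        for (int fd : clients) { FD_SET(fd, &readfds); if (fd > maxfd) maxfd = fd; }

        // Block until at least one descriptor is ready: an "event".
        select(maxfd + 1, &readfds, nullptr, nullptr, nullptr);

        if (FD_ISSET(listenfd, &readfds))      // a new customer arrives
            clients.push_back(accept(listenfd, nullptr, nullptr));

        for (size_t i = 0; i < clients.size(); ) {
            int fd = clients[i];
            if (FD_ISSET(fd, &readfds)) {      // this customer "raised a hand"
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {                  // customer left: clean up
                    close(fd);
                    clients.erase(clients.begin() + i);
                    continue;
                }
                write(fd, buf, static_cast<size_t>(n));  // echo back
            }
            ++i;
        }
    }
}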

In fact, many of today's high-performance servers, for example NodeJs and Ng