Ranking
Input: recipe, user
Output: score (recommendation value); this can be framed either as classification with a soft label (e.g., [0.2, 0.1, 0.4, 0.1, 0.1]) or as regression.
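A minimal sketch of what this interface could look like in code; everything here (the `RecipeRanker` name, the embedding sizes, the two heads) is hypothetical and only illustrates the input/output shapes described above:

```python
# Hypothetical ranking model: (user, recipe) -> score (regression) and a
# soft-label distribution (classification). Dimensions are arbitrary.
import torch
import torch.nn as nn

class RecipeRanker(nn.Module):
    def __init__(self, user_dim=16, recipe_dim=16, num_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(user_dim + recipe_dim, 32), nn.ReLU())
        self.reg_head = nn.Linear(32, 1)            # regression: a single recommendation score
        self.cls_head = nn.Linear(32, num_classes)  # classification: soft-label distribution

    def forward(self, user, recipe):
        h = self.backbone(torch.cat([user, recipe], dim=-1))
        return self.reg_head(h).squeeze(-1), self.cls_head(h).softmax(dim=-1)

user, recipe = torch.randn(4, 16), torch.randn(4, 16)
score, label_dist = RecipeRanker()(user, recipe)
print(score.shape, label_dist.shape)  # torch.Size([4]) torch.Size([4, 5])
```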
Generator
Input: user
Output: recipe for this user
PAC learning tries to study the generalization ability of a learning model (given a set of training data, its ability to carry over to new test data).
I recommend reading this blog post (the definitions below follow it):
https://jeremykun.com/2014/01/02/probably-approximately-correct-a-formal-theory-of-learning/
Approximately correct means the error will be small on new samples,
Probably means that if we play the game over and over we’ll usually be able to get a good approximation.
Error of hypothesis $h$ (with respect to the concept (our target, the “golden” solution) $c$ and the distribution $D$) is the probability, over $x$ drawn via $D$, that $h(x)$ and $c(x)$ differ.
concept class: a collection of concepts (functions) over $X$
PAC Learnable: A problem is PAC-learnable if there is an algorithm $A$ which will likely generate a hypothesis whose error is small for all distributions $D$ over $X$.
More formally: small error -> error probability $\epsilon \in [0,0.5]$; high probability -> $1 - \delta$, with $\delta \in [0,0.5]$. Then we require the algorithm to run in time and space polynomial in $1/\epsilon$ and $1/\delta$.
Let $X$ be a set, and $\mathcal{C}$ be a concept class over $X$. We say that $\mathcal{C}$ is PAC-learnable if there is an algorithm $A(\varepsilon, \delta)$ with access to a query function for the target concept and runtime $O(\textup{poly}(\frac{1}{\varepsilon}, \frac{1}{\delta}))$, such that for all $c \in \mathcal{C}$, all distributions $D$ over $X$, and all inputs $\varepsilon, \delta$ between 0 and 1/2, the probability that $A$ produces a hypothesis $h$ with error at most $\varepsilon$ is at least $1-\delta$. In symbols:
$\displaystyle \textup{P}_{D}(\textup{P}_{x \sim D}(h(x) \neq c(x)) \leq \varepsilon) \geq 1-\delta$
Big idea (illustrated by learning an interval):
The target concept is an interval $[a_0,b_0]$; each query (unit time cost) randomly draws a value $x \in [a_0,b_0]$.
We propose an algorithm $A$: draw $m$ samples (therefore $O(m)$ time), take the smallest one $a_1$ and the largest one $b_1$, and output the interval $[a_1,b_1]$ as the hypothesis.
We need to analyze $P(h(x)\neq c(x))$ (a quantity that depends on the $m$ samples, say $g(m)$); that is, we show that $P_D(g(m) \leq \epsilon) \geq 1- \delta$ already holds for an $m$ that is polynomial in $1/\epsilon$ and $1/\delta$.
$P(h(x)\neq c(x)) = g(m) = \textup{P}_{x \sim D}(x \in A) + \textup{P}_{x \sim D}(x \in B)$, where $A = [a_0,a_1], B = [b_1,b_0]$.
Then, by symmetry, if we can show that $\textup{P}_{x \sim D}(x \in A) \leq \epsilon/2$ holds with probability at least $1-\delta/2$ (and similarly for $B$), we are done.
But how do we relate $\epsilon$ and $m$? The trick is to define an interval $A' = [a_0,a']$ with $P(x\in A') = \epsilon/2$.
If $a' \geq a_1$, then $\textup{P}_{x \sim D}(x \in A) \leq \textup{P}_{x \sim D}(x \in A') = \epsilon/2$ and region $A$ is already fine.
Otherwise ($a_1 > a'$), note that a single sample avoids $A'$ with probability $1 - \epsilon/2$, i.e., $\textup{P}_{x \sim D}(x \notin A') = 1 - \epsilon/2$.
So the probability that all $m$ samples miss $A'$ is $(1 - \epsilon/2)^m$.
Note that the event "region $A$ contributes error more than $\epsilon/2$" (i.e., $\textup{P}_{x \sim D}(x \in A) > \epsilon/2$) implies $a_1 > a'$, which implies that all $m$ samples missed $A'$; hence its probability is at most $(1 - \epsilon/2)^m$.
The same argument holds for region $B$. By the union bound over the two regions, we conclude that
the probability that the error is larger than $\epsilon$ is at most $2(1 - \epsilon/2)^m$;
equivalently, the probability that the error is at most $\epsilon$ is at least $1 - 2(1 - \epsilon/2)^m$.
We want the probability of a small error to be at least $1-\delta$, that is:
$\displaystyle 2(1 - \varepsilon / 2)^m \leq \delta$
Using $1 - x \leq e^{-x}$, it suffices that $2 e^{-\varepsilon m / 2} \leq \delta$, which gives $m \geq \frac{2}{\varepsilon} \ln \frac{2}{\delta}$
(which is polynomial in $1/\epsilon$ and $1/\delta$). Therefore, we finish the proof.
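A quick empirical check of this bound (my own sketch, not part of the original post); the target interval, the distribution $D$ (uniform on $[a_0, b_0]$), and the $\epsilon, \delta$ values are arbitrary:

```python
# Simulate the interval learner: draw m samples, output [min, max], and check
# how often the error stays below epsilon when m = ceil((2/eps) * ln(2/delta)).
import math
import random

def trial(a0=0.2, b0=0.7, eps=0.1, delta=0.05):
    m = math.ceil(2 / eps * math.log(2 / delta))     # sample size from the proof
    xs = [random.uniform(a0, b0) for _ in range(m)]  # m samples drawn from D
    a1, b1 = min(xs), max(xs)                        # hypothesis interval [a1, b1]
    # Error of h under D: probability mass of [a0, a1) plus (b1, b0].
    err = ((a1 - a0) + (b0 - b1)) / (b0 - a0)
    return err <= eps

random.seed(0)
success_rate = sum(trial() for _ in range(2000)) / 2000
print(success_rate)   # should be at least 1 - delta = 0.95 (the bound is loose)
```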
The theorem is:
Let H be a finite hypothesis space. Let D be an arbitrary, fixed unknown probability distribution over X and let $c^*$ be an arbitrary unknown target function. For any $\epsilon$, δ > 0, if we draw a sample from D of size $m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$,
then with probability at least 1 − δ, all hypotheses/concepts in H with error ≥ $\epsilon$ are inconsistent with the sampled data
(with probability at least 1−δ any hypothesis consistent with the sampled data will have error at most $\epsilon$)
Proof: for any fixed $h \in H$ with true error $\geq \epsilon$, the probability that $h$ is consistent with all $m$ samples is at most $(1-\epsilon)^m \leq e^{-\epsilon m}$; by the union bound over the at most $|H|$ such hypotheses, the probability that any of them survives is at most $|H| e^{-\epsilon m} \leq \delta$ for the chosen $m$.
In the non-realizable case (the more general case), there may not be a “golden solution”; we can only compete with the best function (the function with the smallest true error rate) in some concept class H. A natural hope is that picking a concept c with a small observed error rate gives us a small true error rate. It is therefore useful to find a relationship between the observed error rate on a sample and the true error rate.
Consider a hypothesis with true error rate p (for example, a coin of bias p) observed on $m$ examples (the coin is flipped m times). Let S be the number of observed errors (the number of heads seen) so S/m is the observed error rate.
Hoeffding bounds state that for any $\epsilon$ ∈ [0, 1]
$\displaystyle \textup{P}\left(\frac{S}{m} > p + \epsilon\right) \leq e^{-2m\epsilon^2}, \qquad \textup{P}\left(\frac{S}{m} < p - \epsilon\right) \leq e^{-2m\epsilon^2}$
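A small simulation of the first inequality (my own sketch; the bias $p$, sample size $m$, and $\epsilon$ are arbitrary choices):

```python
# Flip a coin of bias p, m times, and compare the empirical probability of a
# large deviation with the Hoeffding bound exp(-2*m*eps^2).
import math
import random

p, m, eps, trials = 0.3, 200, 0.05, 20000
random.seed(0)

exceed = 0
for _ in range(trials):
    S = sum(random.random() < p for _ in range(m))   # number of observed "errors"
    if S / m > p + eps:
        exceed += 1

print(exceed / trials, math.exp(-2 * m * eps ** 2))  # empirical rate vs. Hoeffding bound
```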
Let H be a finite hypothesis space. Let D be an arbitrary, fixed unknown probability distribution over X and let $c^*$ be an arbitrary unknown target function. For any $\epsilon$, δ > 0, if we draw a sample S from D of size $m \geq \frac{1}{2\epsilon^2}\left(\ln|H| + \ln\frac{2}{\delta}\right)$,
then with probability at least (1 − δ), all hypotheses h in H have $|\textup{err}_D(h) - \textup{err}_S(h)| \leq \epsilon$.
Proof: given a hypothesis h, by the Hoeffding bound, the probability that its observed error is more than $\epsilon$ away from its true error is at most
$2e^{-2m\epsilon^2} \leq \delta / |H|$
(the inequality follows from the Hoeffding bound and the definition of $m$).
By the union bound over all h in H, the probability that any hypothesis deviates by more than $\epsilon$ is at most $|H| \cdot \delta/|H| = \delta$, which gives the desired result.
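Spelling out the union-bound step with the notation above ($\textup{err}_D(h)$ is the true error and $\textup{err}_S(h)$ the observed error on $S$):

$\displaystyle \textup{P}\Big(\exists\, h \in H:\ |\textup{err}_D(h) - \textup{err}_S(h)| > \epsilon\Big) \leq \sum_{h \in H} 2e^{-2m\epsilon^2} = 2|H|e^{-2m\epsilon^2} \leq \delta \quad \text{whenever } m \geq \frac{1}{2\epsilon^2}\Big(\ln|H| + \ln\frac{2}{\delta}\Big)$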
The union bound is too loose! There may be a lot of overlap between two hypotheses; that is, two hypotheses may label most data points the same.
The growth function of $H$, $g_H(m)$, is the maximum number of distinct labellings $H$ can induce on any set $S$ of $m$ data points (a labelling for an $h \in H$ is just the results induced by $h$ on the given samples). (I.e., for $m$ samples, at most how many distinct labellings can be produced.)
We say $H$ shatters $S$ if $|H(S)| = 2^m$; we assume this is a binary classification problem, i.e., the label is in {-1,1}. (I.e., there is such a set of $m$ samples that, for any labelling of them, there exists an $h$ producing that labelling.)
$g_H(m) = 2^m$ iff there exists a set $S$ of $m$ points that $H$ shatters.
![Screen Shot 2022-04-02 at 11.54.20 PM](/figures/Screen Shot 2022-04-02 at 11.54.20 PM.png)
A 2D linear separator cannot shatter 4 points.
A D-dimensional linear separator can shatter D+1 points but cannot shatter D+2 points.
If no golden target:
$d_{VC}(H) = $ the largest value of $m$ s.t. $g_H(m) = 2^m$, i.e., the greatest number of data points that can be shattered by $H$
For example, for a D-dimensional linear separator, the VC dimension is $D+1$.
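These claims can be checked by brute force (my own sketch, not from the notes): a labelling of a point set is realizable by a 2D linear separator iff the LP "find $(w, b)$ with $y_i (w \cdot x_i + b) \geq 1$ for all $i$" is feasible; the two point sets below are example configurations.

```python
# Check whether 2D linear separators shatter a given point set by trying every
# labelling and testing linear separability with an LP feasibility problem.
from itertools import product
from scipy.optimize import linprog

def separable(points, labels):
    # Variables (w1, w2, b); constraints -y_i * (w . x_i + b) <= -1.
    A_ub = [[-y * x[0], -y * x[1], -y] for x, y in zip(points, labels)]
    b_ub = [-1.0] * len(points)
    res = linprog(c=[0, 0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
    return res.success

def shattered(points):
    return all(separable(points, labels)
               for labels in product([-1, 1], repeat=len(points)))

three = [(0, 0), (1, 0), (0, 1)]          # 3 points in general position
four = [(0, 0), (1, 0), (0, 1), (1, 1)]   # 4 points; the XOR labelling fails
print(shattered(three))   # True  -> 2D linear separators shatter 3 points
print(shattered(four))    # False -> consistent with VC dimension 3
```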
Agnostic case: training error is not zero.
Training error decreases as the VC dimension increases (easier to fit the training data).
The generalization bound increases as the VC dimension increases (harder to generalize).
Two illustrations of the analysis (VC analysis and bias-variance).
Currently, all GNN-based works (at least to my knowledge) focus on attacking: given a locked circuit, use GNNs to attack it (predict the correct key values).
These works can be roughly divided into two categories based on the target locking method.
To attack some logic locking methods, the problem can be formulated as a link prediction task. For example, for the MUX-based locking method, the attacker is actually trying to determine which edge into the MUX should be preserved (a toy sketch follows after the paper list below).
Here are some papers (although most of them come from the same author)
ICCAD21 - UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction
DATE22 - MuxLink: Circumventing Learning-Resilient MUX-Locking Using Graph Neural Network-based Link Prediction
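To make the formulation concrete, here is a toy link-prediction sketch in plain PyTorch (my own illustration, not the UNTANGLE/MuxLink code; the node features, adjacency matrix, and candidate edges are all made up):

```python
# One hand-rolled GNN layer (mean neighbor aggregation) plus dot-product edge
# scoring: the higher-scoring candidate edge into the MUX is predicted as the
# true connection.
import torch
import torch.nn as nn

class TinyGNNLinkPredictor(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, hid_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                      # mean of neighbor features
        return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

    def score(self, h, edges):
        # Dot-product score for each candidate (u, v) edge.
        return (h[edges[:, 0]] * h[edges[:, 1]]).sum(dim=1)

x = torch.randn(6, 4)                              # 6 gates, 4-dim features
adj = (torch.rand(6, 6) < 0.3).float()             # random toy adjacency
model = TinyGNNLinkPredictor(4, 8)
h = model(x, adj)
candidates = torch.tensor([[0, 5], [1, 5]])        # which edge feeds the MUX at node 5?
print(model.score(h, candidates))                  # keep the higher-scoring edge
```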
A cheatsheet for some common operations I may need to ask Google again and again…
The basic idea is to use conditions to find all the True indices, and select rows/columns based on those indices.
For example, return rows whose first-column value is exactly 2:

```python
a[a[:, 0] == 2]
```
Another example (select rows whose values are all larger than 0):

```python
a[(a > 0).all(1)]
```
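A self-contained, runnable version of the two patterns above (the array `a` is made up):

```python
import numpy as np

a = np.array([[2, 5, -1],
              [3, 1,  4],
              [2, 7,  8]])

rows_eq_2 = a[a[:, 0] == 2]          # rows whose first column equals 2
rows_all_pos = a[(a > 0).all(1)]     # rows whose entries are all > 0
cols_all_pos = a[:, (a > 0).all(0)]  # the same idea along columns
print(rows_eq_2, rows_all_pos, cols_all_pos, sep="\n")
```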
2021 was another year spent in the pandemic. Because of it, most of my life was locked inside the dorm. Even so, 2021 was the most "turbulent" year of my life so far. I was like a sailboat covered in patches yet still set on chasing the sea; through the wind and storms I did make it through the year, even though the journey was really just being pushed along by the storm.
2021 was a year without results: my papers made essentially no progress, I didn't learn anything new or interesting, and even though I got married I still often felt lonely.
But it was also a lucky year. Actually, not just 2021; I have always been a rather lucky person: the brain my parents gave me, the classmates I met in college, the friends in 913, and, after coming to the US, my labmates and two especially friendly advisors. I am really glad to live under their shelter, never having to worry about making a living or about tedious interpersonal relationships.
The tone seems a bit too gloomy; let me put it more positively! In 2021 I managed to do these things:
And then the plan for this year (ambitious!):
Watching 盗月社 always gives me a feeling of warmth. I didn't know why before, but later I understood: it is because they themselves are warm, or rather, their videos and stories are positive and sunny; they are like the sun, actively warming the people around them. I think that is also why the people in their videos are always so enthusiastic and friendly.
Yes, we should also be warm people like that. Life is only a few decades; making money doesn't mean much, and changing or benefiting the world is unlikely, so being a warm person and letting myself and the people around me be bathed in sunshine seems like a pretty solid life choice. Express gratitude proactively, always stay optimistic and positive, and bring warmth rather than gloom to the people around me. I hope I can do that~
That said, I actually don't know how the people of 盗月社 manage to stay sunny all the time. Maybe behind the scenes they are also fragile and also need warmth from others? At least that is how I am. Although I understood long ago that life is destined to be a lonely journey that belongs only to oneself, there are still many moments when I need emotional support from others.
Maybe that is just how life is: when you choose to be a little boat drifting with the wind, you are bound to have regrets.
Looking back on my twenty-four years, there are actually quite a few regrets. For example, as a kid I loved Go but had no learning resources and no teacher, so I could only fumble around by myself with my own understanding of "surrounding". In primary school I drew dozens of maps imitating QQ三国 and even designed complete equipment, story, and combat systems (although all the gameplay was based on the randomness of dice rolls, haha), yet in college I never thought of (or never had the courage to?) look for a job in games, and none of those maps survived. Also, I quite liked drawing as a kid; I once copied a landscape painting from a book, and the art teacher failed me on the grounds that "you must have traced this directly", which left me crying my eyes out and, along the way, killed my joy in art. And the only thing I signed up for in college, the debate team, I chickened out of in the end and never went to the interview...
Most of these regrets really come from my own short-sightedness and cowardice. The short-sightedness may be influenced by family environment and education, but the cowardice is purely my own problem. This is probably also why 白孝文 in White Deer Plain (白鹿原) felt so "shocking" to me, and why Those Things of the Ming Dynasty (明朝那些事) felt so deafening. That is also why I love the phrase I have been repeating to myself for seven years: follow your heart.
So let this be the ending, and also my hope for the new year! Whatever decision I have to make, don't worry about the consequences, don't weigh all the other trivial influences, just remember:
May this heart hold no resentment or regret. (唯愿此心无怨尤)
Geometric deep learning means deep learning specifically for geometric objects, such as graphs and point clouds. Recently, their applications in VLSI have become more and more common and important, for several reasons (from my naive perspective):
This article is a summary of all the related papers I have read.
The two steps are part of the reverse engineering flow.
Reverse engineering is to obtain the circuit information (netlist, sub-circuit functions, ...) from the sold IC chip.
Netlist -> undirected graph.
Feature vectors (for each node):
I suspect these feature vectors may represent the original netlist with some bias; that is, two different netlists may generate the same graph with the same feature vectors (especially their type-c features).
Graph sampling (a large netlist graph into many sub-graphs): GraphSAINT, random walk sampler (RWS).
Models: after experiments, RWS sampling + GAT is used.
Given a certain design type (like adder-mul-mix: the results of two adders are fed into a multiplexer), generate a series of such designs (by using different kinds of adders/multiplexers) and see the node classification performance.
So the experiments are not on industrial designs, nor even on general designs.
Done by my friend :)
Arithmetic block identification: input - flattened netlist graph; output - arithmetic blocks (the same as the last paper).
Flattened gate-level netlist: only primitive gates are instantiated, and the design hierarchy is unknown.
We obtain an input set and an output set (by ABGNN), and we need to find the matching, i.e., a mapping from the input set to the output set (formulated as a maximum flow problem; a sketch follows below).
It is interesting to see that the task is divided into two steps, where the first one is translated into a DL task while the second one is solved by a traditional, elegant algorithm.
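A toy sketch of the matching-as-max-flow idea (not the paper's actual formulation; the pin names, candidate pairs, and unit capacities below are made up for illustration):

```python
# Connect a source to every input pin and every output pin to a sink with unit
# capacities, add the candidate (input, output) edges, and read the matching
# off the maximum flow.
import networkx as nx

inputs = ["i0", "i1", "i2"]
outputs = ["o0", "o1"]
candidates = [("i0", "o0"), ("i1", "o0"), ("i1", "o1"), ("i2", "o1")]

G = nx.DiGraph()
for u in inputs:
    G.add_edge("s", u, capacity=1)
for v in outputs:
    G.add_edge(v, "t", capacity=1)
for u, v in candidates:
    G.add_edge(u, v, capacity=1)

flow_value, flow = nx.maximum_flow(G, "s", "t")
matching = [(u, v) for u, v in candidates if flow[u][v] > 0]
print(flow_value, matching)
```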
Just list some interesting parts:
A very famous but also controversial paper by Google
placing netlists onto chip canvases (two-dimensional grids) so that performance metrics (for example, power consumption, timing, area and wirelength) are optimized, while adhering to hard constraints on density and routing congestion.
Use a GNN to obtain the graph embedding, which is used to 1) generate the next-step action (as the encoder of the RL network) and 2) predict the reward.
For the RL module, each action corresponds to selecting a macro and placing it on the grid. Here, the macro order is determined heuristically.
Every time I try to initialize a PyTorch tensor, I like to use torch.zeros:

```python
sum_each_bbox = torch.zeros(len(bbox), len(bbox[0]))
```

But when I try to assign an integer value to the tensor, like:

```python
sum_each_bbox[i, j] = sum(bbox[i][j])
```

the stored value differs from what was assigned:

```python
tensor(16906704.)
```

Instead, we should specify the data type so that the value is assigned exactly:

```python
sum_each_bbox = torch.zeros(len(bbox), len(bbox[0]), dtype=torch.int64)
```
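The reason is that the default dtype is float32, whose 24-bit mantissa cannot represent every integer above $2^{24} = 16777216$ exactly, so large integers get rounded. A self-contained illustration (the value 16906705 is just an example large odd integer):

```python
import torch

value = 16906705                      # odd integer above 2**24 = 16777216

t_float = torch.zeros(2, 2)           # default dtype is torch.float32
t_float[0, 0] = value
print(t_float[0, 0])                  # tensor(16906704.) -- rounded to the nearest float32

t_int = torch.zeros(2, 2, dtype=torch.int64)
t_int[0, 0] = value
print(t_int[0, 0])                    # tensor(16906705) -- stored exactly
```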
In Python, we sometimes need to define an empty 2-D list to store something, e.g., tensors of different sizes at different grid points of the 2-D list.
Using Google, you may end up initializing the list this way:

```python
a = [[0] * dim_1] * dim_0
```

In this way, every row is a reference to the same inner list: when you change a[0][1], a[1][1] (and the element 1 of every other row) is also changed (if a stores tensors, they all point to the same objects).
The correct way is:

```python
a = [[0] * dim_1 for _ in range(dim_0)]
```
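A quick demonstration of the difference (the 3x4 grid is just an example):

```python
rows, cols = 3, 4

bad = [[0] * cols] * rows                 # every row is the same list object
bad[0][1] = 7
print(bad)    # [[0, 7, 0, 0], [0, 7, 0, 0], [0, 7, 0, 0]]

good = [[0] * cols for _ in range(rows)]  # each row is a fresh list
good[0][1] = 7
print(good)   # [[0, 7, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```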
Hey neway, it is September 2021, my first month at CMU. Although I have not written one of these notes in a long time, I have been drafting it in my head the whole time, so please forgive my laziness and irresponsibility.
Let me record some of the easily forgotten parts of my life between the last note and now.
I originally wanted to look back on my seven years in Hong Kong, but after going through this uneventful little stretch of my life, I realized that rather than spending the ink on myself, it would be better to record the many people and things that lucky me should be grateful for.
In chronological order, I should first thank the two teachers from the Soong Ching Ling Foundation. Thank you for recognizing and believing in me, for always trusting that I could become someone useful to society, and for giving me a beam of sunlight when my sky was covered in clouds.
Then I need to thank my orientation group "mom" and "dad", and all the classmates who helped me when I first arrived in Hong Kong. From the ocamp that got us familiar with the school and the city, to later taking us to get our ID cards, everything felt so natural and smooth. Only after coming to Pittsburgh did I understand how much time, energy, and trial-and-error it takes to arrive in a completely unfamiliar land without any companions. The earlier smoothness was not because my independence was so strong; it was simply because I had your help, nothing more.
Next, thanks to 琦腿, 凯灿, 博一, the friends I met through the Soong Ching Ling Foundation, and all the companions who shared those young college years with me. Because of you, my college life did not feel so dull and cold. Let me thank 博一 once more: thank you for never hesitating when I borrowed money or homework during college, and for letting me know that there really are people in this world who never get angry and are forever warm and gentle.
Then there are Beibei, Prof. Lingming, Prof. Yibo, and Yuzhe, who brought me into research and gave me so much guidance. Because of their hand-holding teaching (really hand-holding: both the problems and the methods were laid right in front of me... facepalm 🤦‍♂️), the past three years have been very fulfilling, and years in which I gained a real sense of accomplishment. Watching Beibei and Lingming go from new assistant professors back then to sought-after stars of the field now, I only have two thoughts: 1. respect (Liu Bang); 2. I want that too (Xiang Yu 🐶).
I also want to thank 荣哥 and 叶辉. Although we were roommates for only about half a year, I fully felt your kindness, gentleness, and tolerance of me.
Next, the friends of 913. Thank you for listening to me when I was sad, for unconditionally accepting and tolerating my rudeness and childishness, and for everyone's kindness, friendliness, and warmth. If I were the maker of the world's rules, how I wish I could stay in 913 for a lifetime.
Then let me thank some Hong Kong people who helped me: the forever warm and friendly hallmates of 文林堂; the kind passerby who picked up my wallet and handed it to the police station; the minibus driver uncle who helped me search and ask around when I lost my wallet; and all the grandpas and grandmas who enthusiastically gave directions whenever I asked the way. Because of you, my love for Hong Kong will always outweigh any resentment; because of you, Hong Kong became my second hometown.
Finally, I also want to thank myself during certain periods of these seven years: the me who prepared for the Soong Ching Ling Foundation interview, the me who stayed up late immersed in research revising papers, the me who kept thinking about how life could become happier.
Besides thanking the people and events of that time, the other thing worth writing down is some scenes I still remember, as a backup in words.
I still remember first arriving at 文林堂: the wildness of stripping the freshmen (down to their underwear), pouring water over them and firing questions; that was also the day I got the name "neway". There are many more scenes from 文林堂: the hall where we practiced dancing together, the friendliness of my two local roommates, the hall's own little tuck shop, the Indonesian lo mein I ate so many times, the mountain shortcut we took to the weekly chapel assembly, and so on. I used to complain about how unlucky I was to be assigned to 文林堂, but thinking about it now, their friendliness and enthusiasm, and the diligence and kindness of the staff, were not things to be taken for granted; they were my good luck.
I still remember the nights rushing deadlines in the lab; almost every time I walked back, the moon kept me company. That is why, when I later read Su Shi's line "多谢残灯不嫌客,孤舟一夜许相依" (thankful that the dying lamp does not mind its guest; for one night the lonely boat lets us keep each other company), it felt so familiar. The deepest impression is still the 3100 project with 小灰灰, 鹏神, 凯灿, and 书赫. Although I basically spent the whole of college in a useless state, the feeling of fighting hard during those few days was really good.
I still remember the various activities organized by CCMSA, including that sky full of stars on the camping trip.
I still remember the time I stole a few bits from several crosstalk (相声) routines, stitched them together, and performed the crosstalk myself.
I still remember beating the 三国志 and 真·三国无双 series N times... What the Chancellor never managed to do, I have helped him achieve many times over.
```bash
pwd   # print working directory
```
```bash
foo=bar   # assign the string "bar" to the variable foo (no spaces around =)
```
- `$1` to `$9` - Arguments to the script. `$1` is the first argument and so on.
- `$@` - All the arguments
- `$?` - Return code of the previous command
- `!!` - Entire last command, including arguments. A common pattern is to execute a command only for it to fail due to missing permissions; you can quickly re-execute the command with sudo by doing `sudo !!`
- `$_` - Last argument from the last command. If you are in an interactive shell, you can also quickly get this value by typing `Esc` followed by `.`
```bash
#!/bin/bash
```
```bash
mv *{.py,.sh} folder   # move all .py and .sh files into folder (globbing + brace expansion)
```
```bash
# Show differences between files in foo and bar
diff <(ls foo) <(ls bar)
```
The shebang line is used to select the interpreter for the script (if it is a Python script, the interpreter should be Python):
- `#!/usr/bin/env python3` – Execute with a Python interpreter, using the env program search path to find it
- `#!/bin/bash` – Execute the file using the Bash shell

```bash
# Delete all files with .tmp extension
find . -name '*.tmp' -exec rm {} \;
```
- `sed`: a command that edits a file without opening it
- `tmux`: terminal multiplexer