
When I was very young, the hospital where my mom worked often had people causing trouble. Families would place coffins at the hospital gate and plaster banners everywhere, and in the worst cases my mom had to go back to her hometown for a while to lie low. Nobody dealt with this kind of thing back then; every time, the hospital simply paid to make it go away.

But because payouts went to whoever made the biggest scene, more and more people showed up with coffins. The last time it happened, the hospital director could not take it anymore and led dozens of doctors and their family members into a brawl with them. After that, this kind of extortion-driven hospital disturbance never happened again.

Just as people love to retell Qiao Feng's fight at Juxian Manor, the kids of hospital staff at the time passed around the legend of Uncle Yang and his Yang-family boxing taking on three men at once in that brawl. Only as I grew up did I understand that the so-called legend was really the helplessness of medical workers forced into a corner. More than a decade of hard study teaches them how to save lives; no book and no course tells these doctors how to deal with thugs or how to protect themselves.

A hospital is, in the end, a place full of life's tragedies and comedies. That thuggish, money-extorting kind of disturbance has become rare, but what about this demonic, impulsive kind of killing? Against disturbances over money there are people like Uncle Yang to protect the doctors, but who protects them when a life is at stake? Or rather, in the moment someone comes to take a life, who could possibly protect them in time?

Like the PUA scandal that blew up a while ago, I do not know how to answer this question. I can only hope things keep getting better, and hope that everyone stays kind and true and acts as a positive, upright person. Let us encourage each other.

Overview

The paper explores the feasibility of using ML techniques to support resource management algorithms. It first discusses the main challenges, then proposes several potential remedies for them, and finally conducts a proof-of-concept experiment that demonstrates the feasibility.

Advantages

  • The paper is well organized and the writing is good; in particular, the figures and tables are well designed, which made the paper a pleasure to read.
  • The discussion of feasibility via IL (imitation learning) and RL (reinforcement learning) is impressive, and the analysis of these methods' disadvantages hits the point (in my opinion).
  • The use of domain knowledge is well surveyed, the challenges and corresponding proposed remedies are inspiring, and the tricks for handling realistic constraints can also be used in other projects.

Drawbacks

  • Although it is hard to formulate a reward function for RL, RL can still be adopted in this direction and deserves a deeper discussion in the future.
  • Fig. 5 is not necessary, since the result table does not belong to this work.

Writing mistakes

  • page1. reached reached -> reached
  • page4. inference time, performance and energy -> inference time, power and energy (if I understand it correctly)
  • page6. The second challenge are model -> The second challenge is model

For more information on GNNs, please refer here.

General GNN

No normalization:
$$
H^{(k+1)} = \mathrm{activation}\big((A + I)\,H^{(k)}\,W\big)
$$

where $A$ is the adjacency matrix, $H^{(0)}$ holds the input features of the graph nodes, and $W$ is the trainable weight matrix.
GCN can also be explained in a message-passing way, where the intermediate representations are viewed as messages.

The aggregation is the actual message-passing phase, in which each node passes its message to its neighbors along the edges ($X = (A+I)H^{(k)}$).

The encoder serves as the integration phase, in which each node integrates the received messages and reduces them into its new message ($\mathrm{activation}(XW)$).

Each pair of message-passing and integration phases forms one GCN layer.

The representation after the final layer is called the node embedding, and the graph embedding of a GCN is usually obtained by a summation or mean over the node embeddings.
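To make the propagation rule above concrete, here is a minimal NumPy sketch of one un-normalized GCN layer followed by a mean-pooled graph embedding. The toy chain graph, ReLU activation, and random weights are my own illustrative assumptions, not taken from these notes.

    import numpy as np

    def gcn_layer(A, H, W):
        # one propagation step: H_next = activation((A + I) H W), no normalization
        A_hat = A + np.eye(A.shape[0])   # add self-loops
        X = A_hat @ H                    # aggregation: collect messages from self and neighbors
        return np.maximum(X @ W, 0.0)    # integration: linear transform + ReLU

    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])         # toy 3-node chain graph
    H0 = np.random.rand(3, 4)            # H^(0): input node features
    W1 = np.random.rand(4, 8)            # trainable weights (random stand-ins here)

    H1 = gcn_layer(A, H0, W1)            # node embeddings after one layer
    graph_emb = H1.mean(axis=0)          # graph embedding by mean pooling

Stacking gcn_layer calls with separate weight matrices gives a multi-layer GCN; in practice $(A+I)$ is usually degree-normalized, which the "no normalization" formula above deliberately omits.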

How about HetGNN?

What is a Het (heterogeneous) Graph?

Multiple kinds of edges, multiple kinds of nodes.


meta-path

  • composed of edge types, e.g., author-paper-author
  • a meta-path carries some semantic information, e.g., author-paper-author = co-author relationship / paper contribution relationship
  • there are multiple meta-paths (their number is actually exponential in the number of edge types): author-paper, author-paper-author, author-paper-venue, author-paper-venue-paper-author…

If we apply a general GNN to a heterogeneous graph…

  • How to decide $A$, the adjacency matrix? (There are multiple adjacency matrices for the multiple types of edges.)
  • How to decide $H^{(0)}$, the input features of the graph nodes? Different kinds of nodes have different kinds of features.

Also, there are some special points to note about heterogeneous graphs:

  • The number of direct neighbors is significantly unbalanced in some cases (academic graphs, for example).
  • The formats of the input features may differ (image, video, text…).

How to solve the problem?

Some new or modified steps are introduced:

  • [new] Reduce the input content to a vector feature

  • [modified] Decide which neighbors to aggregate (a proper adjacency matrix $A$) and aggregate the neighbor features by type (node type / edge type / meta-path type)

  • [modified] Encode new features from the aggregated neighbor features

  • [new] Encode the new node feature from the generated features of the different types

| Step | Input | Output | RGCN | HetGNN (KDD 19) | HetGAN (WWW 19) | GTN (NIPS 19) | General GCN |
|---|---|---|---|---|---|---|---|
| [new] Reduce input content to a vector feature of the node | input contents of each node (figures, texts…) | vector feature of each node | NA | contents -> content features -> node features (by biLSTM) | NA | NA | NA |
| [modified] Decide neighbors (proper adjacency matrix $A$) to aggregate, and aggregate neighbor features by type (node / edge / meta-path) | the graph | for each node $v$, which nodes it should receive messages from | direct neighbors | type-based neighbors of fixed size selected by random walk | manually selected meta-path neighbors | attention-based meta-path (the meta-path is represented as a weighted multiplication of the different edge types' $A$) | direct neighbors |
| [modified] Encode new features from aggregated neighbor features | received messages of each type | encoded message of the corresponding type | linear transformation | BiLSTM | attention sum with activation | linear transformation | linear transformation |
| [new] Encode the new node feature from the generated features of different types | encoded messages of the different types | final new feature (message) of node $v$ | sum | attention sum | attention sum with activation | concat | NA |
| Neighbor type | NA | NA | edge type | node type | meta-path type | meta-path type | NA |
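As a rough illustration of the "decide/aggregate neighbors", "encode per type", and "combine types" rows of the table above, the sketch below aggregates neighbors per type and then combines the per-type messages with an attention-weighted sum, loosely in the spirit of HetGNN. The mean aggregator, single attention vector, and toy shapes are my own simplifying assumptions, not any paper's exact formulation.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def aggregate_per_type(neighbors_by_type):
        # reduce each type's neighbor features to one message (mean here)
        return {t: feats.mean(axis=0) for t, feats in neighbors_by_type.items()}

    def combine_types(messages_by_type, attn_vec):
        # attention-weighted sum over the per-type messages
        types = sorted(messages_by_type)
        M = np.stack([messages_by_type[t] for t in types])  # (num_types, feat_dim)
        scores = softmax(M @ attn_vec)                      # one attention score per type
        return scores @ M                                   # new feature of node v

    # toy example: node v has neighbors of two types, feature dimension 4
    neighbors = {"author": np.random.rand(3, 4),
                 "venue":  np.random.rand(2, 4)}
    attn_vec = np.random.rand(4)                            # learnable in practice
    new_feature = combine_types(aggregate_per_type(neighbors), attn_vec)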

Performance comparison

  • [new] Reduce the input content to a vector feature
    • biLSTM (0.67) > fc (0.65) [HetGNN]
  • [modified] Decide which neighbors to aggregate (a proper adjacency matrix $A$) and aggregate the neighbor features by type (node type / edge type / meta-path type)
  • [modified] Encode new features from the aggregated neighbor features
  • [new] Encode the new node feature from the generated features of the different types
    • attention sum (0.768) > fc (0.763) [HetGNN]
  • One more level of attention is always better… (including GTN)

With the DAC deadline behind me and the semester over, I used the free time to do some things I had long wanted to do, like Ring Fit on the Switch, picking basketball back up, and reading.

To be precise, reading history. To be even more precise, the history of the Chu-Han contention and of the two Jin dynasties.

Reading about the Chu-Han contention is something I had wanted to do for a long time. Though I no longer dream of being a hero, the worship of heroes still seems to be written into my genes. From "are kings and nobles born to their station?" to "they are the knife and chopping board, and we are the fish and meat," that era, like the Three Kingdoms period I love, was a time when heroes emerged in droves. So I had long wanted to know how good the Three Heroes of the early Han really were, how imposing the mountain-heaving Hegemon-King was, and how he ended up cutting his own throat by the Wu River.

Looking back on it now, this stretch of history feels like a replay of the best part of the Three Kingdoms, from Guandu to Wuzhangyuan: climax after climax, heroes everywhere. Even if you read it just for the spectacle, it is a great story.

Beyond the spectacle, though, a few points left a deep impression on me. (A complaint about my own writing: it is now neither fish nor fowl. My Chinese has lost the old habit of quoting the classics, and my English is a mess; I never picked up the sesame seeds and have pretty much dropped the watermelon. Take "left a deep impression": in Chinese the best I can do is blurt out slang like "stunning" or "whoa," and in English it is even simpler, with a single "impressive" standing in for every word of its kind. Truly embarrassing.)

The first point is about recognizing talent and the debt owed to those who recognize you. Ancient Confucian thought placed great weight on gratitude toward a patron who sees your worth. As a child I heard the line "to repay the lord's regard shown on the Golden Terrace"; back then I only thought the man sounded gallant, but now I can relate a little. When you doubt yourself and are close to giving up on yourself, having someone see something in you is an enormous honor and stroke of luck. Of course, the recognition received by those heaven-sent geniuses of the past is not quite the same as for an ordinary person like me: they had real talent with nowhere to serve, while my "real talent" is still a bud that has not even sprouted! Even Han Xin, a peerless man of the realm, nearly went unrecognized and would have been buried in obscurity for life; is it not perfectly normal for mediocre people like us to go unnoticed? And even someone as formidable as Han Xin did not get a perfect ending. Sometimes you really should not force things: when your turn comes, choose carefully, as Wang Meng did in choosing his lord; when it does not, do not curse heaven and wail about it. The moment something has to be fought over, it changes flavor, and even if you win it, it may no longer taste sweet. But if your turn does come and you are given the chance, you must understand the weight of that chance: as the saying goes, if a lord treats me as a statesman of the realm, I must repay him as one.

For now I only need to understand gratitude for being recognized; I may never need the skill of recognizing and employing talent myself. Still, I cannot help but marvel that the art of reading and managing people really is the first attribute of a leader. So what if you, Hegemon-King of Western Chu, won all seventy-odd battles large and small? So what if you beat Liu Bang so badly that he abandoned his wife and children more than once? You still could not match Liu Bang's eye for people and his skill at using them... or his shamelessness. For a leader, or for a professor, one great taboo is stubborn self-assurance. After being in charge for a long time, most of what you hear is flattery, and since making it to a leadership position means you certainly have some ability, it is very natural to overestimate your own level, underestimate everyone else's, and find it hard to take in opinions that go against your own. (Han Xin profited handsomely from this more than once.) If I am ever lucky enough to become a professor, I hope I will not turn into a "leader" who pigeonholes people, casually labels a student "unreliable," and dismisses his ideas out of hand, only to find that maybe I was the one who judged wrong and the Han Xin on the other side ends up "stealing" the idea.

Unlike the Three Kingdoms, with its TV-drama-length storyline, the Chu-Han war is more like a film: from Xiang Yu declaring himself leader of the alliance to his total collapse took barely five years. So the whole Chu-Han contention reads mostly as a military history; the guerrilla tactics and "stockpile grain" strategies got my blood pumping, but at most I was watching the show. The people who truly study this are the ones who go off to fight wars, like Chairman Mao back in the day? The history of the two Jin dynasties is the exact opposite: it is not a war blockbuster, and if I had to give it a name, it would be... a power-and-ethics TV drama?

I will not go into all the worldview-shattering events caused by the collapse of ritual and propriety in that era; thankfully I was born in an age that emphasizes the core socialist values. (Seriously.) The figure who struck me most was Fu Jian. I honestly knew little about him before, and little about Wang Meng either, beyond the Battle of Fei River and "every bush and tree looks like an enemy soldier." But reading this history, I came away marveling at the man. Perhaps by sheer contrast with the other lunatics of the period, Fu Jian enters the stage with a protagonist's halo, debuting as an envoy of justice: the people content, the government clean, appointments made on merit. You could say he is like the destined Son of Heaven of an old tale, as if this man had been sent by heaven to be emperor. And just as in those tales Liu Bang meets Xiao He and Zhu Yuanzhang meets Liu Bowen, Fu Jian met Wang Meng, whose command and intellect stats were maxed out. The two took to each other like fish to water, pacified the north, then took Shu-Han (a bit reminiscent of the Western Jin). Sadly, before they could pacify the Jiangnan region, Wang Meng died, and Fu Jian ignored his dying advice to first root out the Qiang and the Hu, rushing instead to become the emperor of a unified realm. One more sigh here: Fu Jian was good in almost every respect (certain private proclivities aside); his problems were that his own ability was not quite enough, and that he was too "Confucian": those who surrendered voluntarily were dragged off and beheaded, while those who fought to the last moment or who betrayed him were rewarded with high office and generous salaries...

The way he lost at the Fei River was also classic: a textbook rout of the many by the few. Having read so many battles where the larger force loses, they come down to two kinds: physical blows (flooding, fire attacks, and the like), or morale wars, where a side that could win simply by holding its nerve somehow has its morale collapse for one reason or another. Fu Jian's case was the latter... He declined to strike while the enemy was crossing the river (compare Liu Bang and Han Xin, who would extend no such courtesy: step into my trap and start crossing, and I will absolutely do you dirty), and instead pulled his army back to let the other side form up. Then the fake defectors in his own rear spread the rumor that the great army was retreating, and morale collapsed. Reading it, I was equal parts furious and heartbroken, and could not help wondering what kind of genius it takes to unify the whole country. So when you think about failure, even the kind of crushing defeat from which there is no coming back, it is not such a terrible thing. What counts as success and what counts as failure in a human life, everyone has their own answer. Wanting to accomplish some "great thing" does not mean you will, however doable it looks; and if you do not accomplish it, does that count as failure? Not necessarily. At the very least, in my view Fu Jian was not a competent leader, but he earned the right to say his conscience was clear. After all, even Zhuge Liang heaved his long sigh at Changban.

The long river of history is, in the end, a story about human nature: riveting and thought-provoking. That, I suppose, is where its charm lies!

This family tree is recorded only by my supervisor and me, so due to my ignorance it certainly cannot contain every famous professor. In particular, the tree focuses on Chinese scholars…

Update 2023.07: During DAC, I was happy to learn that this EDA tree has been viewed by many distinguished scholars. If there is any update you would like me to make, feel free to contact me at weili3@andrew.cmu.edu.

MIT

Stanford

Berkeley

CMU

UIUC

OpenMPL has finally reached a stopping point. Looking back at the code I wrote feels like a parent looking at a child who never quite amounted to much but at least managed to earn a doctoral cap: a bit smelly, sure, but still one's own flesh and blood.

This kind of heavily engineering work cost me many rough nights: staring at the layout's ground-truth colors (thank goodness I at least had that label) and my own child's colors, figure by figure, asking where exactly things went wrong... Writing the dancing links code was the same, stepping through it depth by depth, wondering how the search ended up at this node at this level...

But I made it through in the end. Along the way I even stumbled onto a flaw in the earlier theory and proposed a new algorithm, so it is a twin of sorts, haha. It really was a blessing in disguise, and it made me believe even more firmly not to fight over or fixate on external things: the boat straightens itself when it reaches the bridge, and even if all the money is gone we will always find a way. That, after all, is what makes people charming.

As for MPL itself, it is genuinely hard to make a new breakthrough on this topic under the reign of ILP: once an algorithm can already find the optimal solution in acceptable time, the only thing left to beat is runtime. The foreseeable next step is probably GCN? After all, with GPU acceleration a GCN would surely crush ILP/DL on runtime. Then again, if a GCN really did solve MPL nearly optimally, the resulting paper would belong not just at DAC but at a top AI or even mathematics venue, since MPL is an advanced version of the graph coloring problem.

Attached is Yibo's official release. Feels good.

We are pleased to announce that OpenMPL 2.0 has been released. Please check out our GitHub repository and ASICON slides.

Github: https://github.com/limbo018/OpenMPL
Slides: http://yibolin.com/publicati…/…/DFM_ASICON2019_Li.slides.pdf

OpenMPL is an open-source layout decomposition tool for multiple patterning lithography. You are welcome to try it out!

This release includes many new features.

* Triple and quadruple patterning decomposition
* Decomposition of contact and metal layers
* Stitch insertion
* Density balancing
* State-of-the-art algorithms: Backtrack, ILP, SDP, Dancing Links
* Multiple levels of graph simplifications
* Multithreading

RNG lost after all. Last night I watched it crumble bit by bit, and even at the end I could not believe it.

Last year I went in carrying a beautiful dream, and then it was as if a treasure box I had guarded for a year was smashed to pieces.

This year I had no real hopes; before the tournament I kept telling myself that my slime monsters were just there to make up the numbers.

But when they lost, it still did not sit right.

There is no need to console myself with lines like "this is the charm of competitive sports." Maybe there will be no more chances to watch Uzi keep putting on a show. Anyway, life goes on. Keep at it.

My SSD arrived today, so I decided to install the system onto it. It turned into quite an ordeal: I tried many Linux distributions, so let me write them all down.

Ubuntu 18

GNOME: still has the bug where CPU usage sits at 50%+.

Manjaro KDE

Pretty, yes, but you have to set up all the common shortcuts yourself??? (Including but not limited to switching workspaces and quickly opening a terminal.)

Manjaro xKDE (I forget the name)

Extremely unfriendly to HiDPI screens. Unbelievable.

Manjaro Cinnamon

HiDPI-friendly and the shortcuts are configurable, so I was fairly satisfied... but there is no convenient way to get a dock. Again, unbelievable.

Deepin

It really is quite pretty. But no Wi-Fi (a common problem with all Debian-based distros), plus installing the graphics driver is inconvenient (the official one would not install, possibly a proxy problem?).

Interim summary

After a whole day I still have not found one I am satisfied with. Tomorrow I plan to try Manjaro + GNOME + macOS, and if that does not work, Manjaro + Deepin.

In any case, a few pain points absolutely must be addressed:

  • Wi-Fi (otherwise dealing with the school proxy is just too...)
  • HiDPI support
  • Shortcuts for workspaces, the terminal, and so on
  • Cosmetic things like a dock I will just have to give up on. Ah, life is hard; who told me to be this obsessive.

The final chapter of the tinkering

This is the summary, two months later:

Long live Windows.

Also, here is a link for changing the global Windows 10 font to PingFang (PingFang really is perfect for HiDPI screens):

Click me

The office desktops in CUHK CSE need a proxy to reach the external network, but apt on Ubuntu does not use the global proxy by default. I am writing down these steps for whoever comes after.

First, add the proxy to the apt config:

sudo vim /etc/apt/apt.conf.d/proxy.conf

In the newly created proxy.conf, add the CUHK CSE proxy:

# Note: it must be http://proxy..., not just proxy..
Acquire::http::Proxy "http://proxy.cse.cuhk.edu.hk:8000";
Acquire::https::Proxy "http://proxy.cse.cuhk.edu.hk:8000";

Then, when actually adding a repository:

sudo -E add-apt-repository ppa:xxx

-E: preserve the current environment variables instead of resetting them (without it, http_proxy and friends no longer take effect).

Roll back to a previous version

git reset --hard HEAD^

HEAD^: the previous version.

HEAD^^: two versions back.

HEAD~100: 100 versions back.

View the command history:

git reflog

Workflow

git-repo

Undo changes that have not yet been staged (git add):

git checkout -- file

This command actually replaces the working-tree version with the version in the repository (so if you accidentally delete a file in the working tree, git checkout can restore it too).

Undo changes that have already been staged (git add):

git reset HEAD file

View information about the remote repository:

git remote -v

Git pull issues

  • There is no tracking information for the current branch

    git branch --set-upstream-to=origin/dev dev
  • conflicts