BiDAF Review and Notes Bi-Directional Attention Flow For Machine Comprehension We introduce the Bi-Directional Attention Flow (BiDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels. BiDAF includes..
When the home/end keys don't work in zsh with oh-my-zsh Just a personal note... Environment: Ubuntu 16.04, ssh connection via Xshell, using zsh with oh-my-zsh installed. Problem: the home/end keys work fine inside vi/vim, but in the shell they don't behave as expected: pressing home does nothing, and pressing end inserts a ~ character. Fix: vi ~/.zshrc and add the two lines below: bindkey "\033[1~" beginning-of-line bind..
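The excerpt is cut off after the first binding. A minimal sketch of the ~/.zshrc fix being described, assuming the truncated second line binds the End key's escape sequence to end-of-line (the exact sequences vary by terminal; running cat -v and pressing the key shows what is actually sent):

# ~/.zshrc: map the escape sequences the terminal sends for Home/End
# to the matching zsh line-editor (ZLE) widgets.
bindkey "\033[1~" beginning-of-line   # Home
bindkey "\033[4~" end-of-line         # End (assumed completion of the cut-off line)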
[cs224n] Lecture 9 Machine Translation and Advanced Recurrent LSTMs and GRUs [slide] [video] Gated Recurrent Units by Cho et al. (2014) Long Short-Term Memory by Hochreiter and Schmidhuber (1997) Recap Word2Vec $J_t(\theta)=\log\sigma(u_o^\top v_c)+\sum_{j\sim P(w)}\left[\log\sigma(-u_j^\top v_c)\right]$..
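As a gloss on the recap formula, here is a minimal numpy sketch of the skip-gram negative-sampling objective above; the vectors, dimensions, and sampled negatives are made-up illustration values, not anything from the lecture:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_objective(v_c, u_o, U_neg):
    # J_t = log sigma(u_o^T v_c) + sum_j log sigma(-u_j^T v_c),
    # where the rows of U_neg are vectors of negative words sampled from P(w).
    positive = np.log(sigmoid(u_o @ v_c))              # observed (center, outside) pair
    negatives = np.log(sigmoid(-(U_neg @ v_c))).sum()  # k sampled negatives pushed down
    return positive + negatives

rng = np.random.default_rng(0)
d, k = 50, 5                      # embedding dim and negative-sample count (illustrative)
v_c = rng.normal(size=d)          # center word vector
u_o = rng.normal(size=d)          # outside (context) word vector
U_neg = rng.normal(size=(k, d))   # k negatives drawn from the noise distribution P(w)
print(neg_sampling_objective(v_c, u_o, U_neg))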
Fake News Detection on Social Media: A Data Mining Perspective paper | dataset The fake news detection problem is viewed from two perspectives: Characterization and Detection. Contribution Discuss narrow and broad definitions of fake news that cover most existing..
[cs224n] Lecture 4 Word Window Classification and Neural Networks Overview Classification background Updating word vectors for classification Window classification & cross entropy derivation tips A single layer neural network Max-Margin loss and ba..
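Since the excerpt covers window classification and the cross-entropy derivation, a minimal numpy sketch of softmax cross-entropy over a concatenated word window may help; the window size, class count, and dimensions are made-up illustration values:

import numpy as np

def softmax(z):
    z = z - z.max()   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Window classification: concatenate the word vectors in a window around
# the center word, then score C classes with a single linear layer.
rng = np.random.default_rng(0)
d, window, C = 25, 5, 4                  # word-vector dim, window size, classes (illustrative)
x = rng.normal(size=d * window)          # concatenated window of word vectors
W = rng.normal(size=(C, d * window)) * 0.01
b = np.zeros(C)

p = softmax(W @ x + b)                   # predicted class distribution
y = 2                                    # true class index (illustrative)
cross_entropy = -np.log(p[y])            # CE with a one-hot target reduces to -log p(true class)
print(cross_entropy)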
Enriching Word Vectors with Subword Information A review of FastText, one of the word embedding methods. paper | code Model Take morphology into account Consider subword units Represent words by the sum of their character n-grams skip-gram introduce..
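To make the subword idea concrete, here is a minimal sketch of representing a word as the sum of its character n-gram vectors; the bucket count and embedding size are small illustration values (real FastText hashes n-grams of length 3 to 6 into about two million buckets, while the sketch uses Python's built-in hash):

import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    # FastText-style subwords: wrap the word in boundary markers,
    # then take every character n-gram of length n_min..n_max.
    w = f"<{word}>"
    grams = [w[i:i + n] for n in range(n_min, n_max + 1)
             for i in range(len(w) - n + 1)]
    return grams + [w]                   # the full word is also kept as one unit

B, d = 10_000, 100                       # hash buckets and embedding dim (illustrative)
rng = np.random.default_rng(0)
E = rng.normal(scale=0.01, size=(B, d))  # n-gram embedding table

def word_vector(word):
    # Represent the word by the sum of its character n-gram vectors.
    idx = [hash(g) % B for g in char_ngrams(word)]
    return E[idx].sum(axis=0)

print(word_vector("where")[:5])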
[cs224n] Lecture 1 Natural Language Processing with Deep Learning I recently started watching Stanford's cs224n lectures on NLP. As I go through them one by one, I plan to leave brief summaries on the blog so I can come back to them later. Plan What is Natural Language Processing? The nature of human language What is Deep Learning Why ..
Review of a paper on the Joint Many-Task (JMT) Model A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher The University of Tokyo Accepted at EMNLP 2017 While looking through the papers that appeared at EMNLP 2017, Growing Neural Network and Multiple NLP ..