Word Embeddings: Encoding Lexical Semantics

Posted by czhwust


Word Embeddings in PyTorch

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5)  # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)

Out:

tensor([[ 0.6614,  0.2669,  0.0617,  0.6213, -0.4519]],
       grad_fn=<EmbeddingBackward>)
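
The same embedding table can also look up several indices in one call. As a small aside reusing the embeds and word_to_ix defined above (the batch_idxs name is just illustrative), a tensor of two indices returns one row per word:

batch_idxs = torch.tensor([word_to_ix["hello"], word_to_ix["world"]], dtype=torch.long)
print(embeds(batch_idxs))  # a (2, 5) tensor: one 5-dimensional embedding per index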

An Example: N-Gram Language Modeling

CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples.  Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]
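
# Quick sanity check: print the first three (context, target) pairs to
# confirm the two-word windows line up with their target words.
print(trigrams[:3])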

vocab = set(test_sentence)  # a set keeps only distinct words, so this is our vocabulary
word_to_ix = {word: i for i, word in enumerate(vocab)}

class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        # Concatenate the context embeddings into one row of shape
        # (1, context_size * embedding_dim), then score every vocabulary word.
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)
        return log_probs

losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = 0
    for context, target in trigrams:

        # Step 1. Prepare the input: turn the context words into integer
        # indices and wrap them in a tensor.
        context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)

        # Step 2. PyTorch accumulates gradients, so zero them out before
        # each new instance.
        model.zero_grad()

        # Step 3. Run the forward pass, getting log probabilities over the vocabulary.
        log_probs = model(context_idxs)

        # Step 4. Compute the loss (the target word index must also be wrapped in a tensor).
        loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))

        # Step 5. Backpropagate and update the parameters.
        loss.backward()
        optimizer.step()

        # item() extracts the Python number from a 1-element tensor.
        total_loss += loss.item()
    losses.append(total_loss)
print(losses)  # the total loss should decrease with every epoch
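
After training, the learned vectors live in model.embeddings.weight. As a quick check, reusing the model and word_to_ix from above, the embedding of a single vocabulary word such as "beauty" can be read out by its index:

print(model.embeddings.weight[word_to_ix["beauty"]])  # the learned 10-dimensional vector for "beauty"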

Exercise: Computing Word Embeddings: Continuous Bag-of-Words

CONTEXT_SIZE = 2  # 2 words to the left, 2 to the right
raw_text= """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()


# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)

word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
    context = [raw_text[i - 2], raw_text[i - 1], raw_text[i + 1], raw_text[i + 2]]
    target = raw_text[i]
    data.append((context, target))
print(data[:5])

class CBOW(nn.Module):
    def __init__(self):
        pass  # TODO (exercise): define the embedding and linear layers here

    def forward(self, inputs):
        pass  # TODO (exercise): combine the context embeddings and return log probabilities

def make_context_vector(context, word_to_ix):
    idxs = [word_to_ix[w] for w in context]
    return torch.tensor(idxs, dtype=torch.long)

make_context_vector(data[0][0], word_to_ix)  # example: index tensor for the first context window
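
The CBOW class above is intentionally left as an exercise. For reference, here is one possible way to fill it in, written as a separate, hypothetical CBOWSketch class (the name and layer choices are illustrative, not from the original text): sum the context embeddings, map the sum to vocabulary scores with a single linear layer, and return log probabilities so the output pairs with nn.NLLLoss exactly as in the n-gram example.

class CBOWSketch(nn.Module):
    # One possible answer to the exercise, kept separate so the stub above
    # stays open for the reader.
    def __init__(self, vocab_size, embedding_dim):
        super(CBOWSketch, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, inputs):
        # inputs: LongTensor of context word indices, shape (context_len,)
        embeds = self.embeddings(inputs).sum(dim=0, keepdim=True)  # (1, embedding_dim)
        out = self.linear(embeds)                                   # (1, vocab_size)
        return F.log_softmax(out, dim=1)

# Example usage with the data prepared above:
sketch = CBOWSketch(vocab_size, embedding_dim=10)
context_vector = make_context_vector(data[0][0], word_to_ix)
print(sketch(context_vector).shape)  # torch.Size([1, vocab_size])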

