Documentation
Overview
Example
package main

import (
    "fmt"

    "github.com/Soontao/jiebago/tokenizers"
)

func main() {
    sentence := []byte("永和服装饰品有限公司")

    // default mode
    tokenizer, _ := tokenizers.NewJiebaTokenizer("../dict.txt", true, false)
    fmt.Println("Default Mode:")
    for _, token := range tokenizer.Tokenize(sentence) {
        fmt.Printf("Term: %s Start: %d End: %d Position: %d Type: %d\n",
            token.Term, token.Start, token.End, token.Position, token.Type)
    }

    // search mode
    tokenizer, _ = tokenizers.NewJiebaTokenizer("../dict.txt", true, true)
    fmt.Println("Search Mode:")
    for _, token := range tokenizer.Tokenize(sentence) {
        fmt.Printf("Term: %s Start: %d End: %d Position: %d Type: %d\n",
            token.Term, token.Start, token.End, token.Position, token.Type)
    }
}
Output:

Default Mode:
Term: 永和 Start: 0 End: 6 Position: 1 Type: 1
Term: 服装 Start: 6 End: 12 Position: 2 Type: 1
Term: 饰品 Start: 12 End: 18 Position: 3 Type: 1
Term: 有限公司 Start: 18 End: 30 Position: 4 Type: 1
Search Mode:
Term: 永和 Start: 0 End: 6 Position: 1 Type: 1
Term: 服装 Start: 6 End: 12 Position: 2 Type: 1
Term: 饰品 Start: 12 End: 18 Position: 3 Type: 1
Term: 有限 Start: 18 End: 24 Position: 4 Type: 1
Term: 公司 Start: 24 End: 30 Position: 5 Type: 1
Term: 有限公司 Start: 18 End: 30 Position: 6 Type: 1
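Note that Start and End are byte offsets into the UTF-8 input, not character offsets: each of these Chinese characters occupies three bytes, which is why the two-character term 永和 spans bytes 0 to 6.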
Example (BleveSearch)
package main

import (
    "fmt"
    "log"
    "os"

    _ "github.com/Soontao/jiebago/tokenizers"

    "github.com/blevesearch/bleve"
    _ "github.com/blevesearch/bleve/analysis/analyzer/custom"
)

func main() {
    // open a new index
    indexMapping := bleve.NewIndexMapping()
    err := indexMapping.AddCustomTokenizer("jieba", map[string]interface{}{
        "file": "../dict.txt",
        "type": "jieba",
    })
    if err != nil {
        log.Fatal(err)
    }

    // create a custom analyzer
    err = indexMapping.AddCustomAnalyzer("jieba", map[string]interface{}{
        "type":      "custom",
        "tokenizer": "jieba",
        "token_filters": []string{
            "possessive_en",
            "to_lower",
            "stop_en",
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    indexMapping.DefaultAnalyzer = "jieba"

    cacheDir := "jieba.beleve"
    os.RemoveAll(cacheDir)
    index, err := bleve.New(cacheDir, indexMapping)
    if err != nil {
        log.Fatal(err)
    }

    docs := []struct {
        Title string
        Name  string
    }{
        {
            Title: "Doc 1",
            Name:  "This is the first document we’ve added",
        },
        {
            Title: "Doc 2",
            Name:  "The second one 你 中文测试中文 is even more interesting! 吃水果",
        },
        {
            Title: "Doc 3",
            Name:  "买水果然后来世博园。",
        },
        {
            Title: "Doc 4",
            Name:  "工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作",
        },
        {
            Title: "Doc 5",
            Name:  "咱俩交换一下吧。",
        },
    }

    // index docs
    for _, doc := range docs {
        index.Index(doc.Title, doc)
    }

    // search for some text
    for _, keyword := range []string{"水果世博园", "你", "first", "中文", "交换机", "交换"} {
        query := bleve.NewQueryStringQuery(keyword)
        search := bleve.NewSearchRequest(query)
        search.Highlight = bleve.NewHighlight()
        searchResults, err := index.Search(search)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("Result of \"%s\": %d matches:\n", keyword, searchResults.Total)
        for i, hit := range searchResults.Hits {
            rv := fmt.Sprintf("%d. %s, (%f)\n",
                i+searchResults.Request.From+1, hit.ID, hit.Score)
            for fragmentField, fragments := range hit.Fragments {
                rv += fmt.Sprintf("%s: ", fragmentField)
                for _, fragment := range fragments {
                    rv += fmt.Sprintf("%s", fragment)
                }
            }
            fmt.Printf("%s\n", rv)
        }
    }
}
Output:
Constants
const Name = "jieba"
Name is the jieba tokenizer name.
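For illustration (not from the package docs), Name can stand in for the string literal "jieba" when registering the tokenizer in an index mapping. In this sketch the custom tokenizer name "jieba_zh" and the dictionary path are assumptions:

package main

import (
    "fmt"
    "log"

    "github.com/Soontao/jiebago/tokenizers"
    "github.com/blevesearch/bleve"
)

func main() {
    indexMapping := bleve.NewIndexMapping()
    // Register under a custom name, using the package's Name constant
    // ("jieba") as the "type" value instead of a string literal.
    err := indexMapping.AddCustomTokenizer("jieba_zh", map[string]interface{}{
        "type": tokenizers.Name,
        "file": "../dict.txt", // hypothetical dictionary path
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("registered tokenizer type:", tokenizers.Name)
}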
Variables
This section is empty.
Functions
func JiebaTokenizerConstructor
func JiebaTokenizerConstructor(config map[string]interface{}, cache *registry.Cache) (analysis.Tokenizer, error)
JiebaTokenizerConstructor creates a JiebaTokenizer.
Parameter config should contain at least the following parameter (see the sketch after this list):

file: the path of the dictionary file.
hmm: optional, specifies whether to use the Hidden Markov Model; see NewJiebaTokenizer for details.
search: optional, specifies whether to use search mode; see NewJiebaTokenizer for details.
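As an illustration (not from the package docs), the constructor can also be called directly with such a config map. A minimal sketch, assuming the cache argument is not consulted here and that a fresh registry.Cache is acceptable; bleve normally supplies its own cache when building the tokenizer from an index mapping:

package main

import (
    "fmt"
    "log"

    "github.com/Soontao/jiebago/tokenizers"
    "github.com/blevesearch/bleve/registry"
)

func main() {
    // Config map as documented above; "hmm" and "search" are optional.
    cfg := map[string]interface{}{
        "file":   "../dict.txt",
        "hmm":    true,
        "search": false,
    }
    // Assumption: passing a fresh cache is acceptable for a direct call.
    tokenizer, err := tokenizers.JiebaTokenizerConstructor(cfg, registry.NewCache())
    if err != nil {
        log.Fatal(err)
    }
    for _, token := range tokenizer.Tokenize([]byte("永和服装饰品有限公司")) {
        fmt.Printf("%s ", token.Term)
    }
    fmt.Println()
}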
func NewJiebaTokenizer

func NewJiebaTokenizer(dictFilePath string, hmm bool, searchMode bool) (analysis.Tokenizer, error)
NewJiebaTokenizer creates a new JiebaTokenizer.
Parameters:
dictFilePath: the path of the dictionary file.
hmm: whether to use the Hidden Markov Model to cut unknown words, i.e. words not found in the dictionary. For example, the word "安卓" ("Android" in English) is not in the dictionary file. If hmm is set to false, it will be cut into two single-character words, "安" and "卓"; if hmm is set to true, it will be treated as one word, because jieba uses a Hidden Markov Model with the Viterbi algorithm to guess the most likely segmentation (as illustrated in the sketch after this list).
searchMode: whether to further cut long words into several shorter words. In Chinese, some long words may contain other words; for example, "交换机" is the Chinese word for "switch" (the network device). If searchMode is false, "交换机" is treated as a single word. If searchMode is true, it is further split into "交换" and "换机", which are also valid Chinese words.
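To make the hmm parameter concrete, here is a minimal sketch (not from the package docs; the input sentence is an assumption, and the exact segmentation depends on the dictionary file):

package main

import (
    "fmt"

    "github.com/Soontao/jiebago/tokenizers"
)

func main() {
    sentence := []byte("我用安卓手机") // hypothetical input containing "安卓"

    // hmm disabled: a word missing from the dictionary is expected to
    // fall apart into single characters, e.g. "安" and "卓".
    noHMM, _ := tokenizers.NewJiebaTokenizer("../dict.txt", false, false)
    for _, token := range noHMM.Tokenize(sentence) {
        fmt.Printf("%s/", token.Term)
    }
    fmt.Println()

    // hmm enabled: the Viterbi-based guesser is expected to keep
    // "安卓" together as one word.
    withHMM, _ := tokenizers.NewJiebaTokenizer("../dict.txt", true, false)
    for _, token := range withHMM.Tokenize(sentence) {
        fmt.Printf("%s/", token.Term)
    }
    fmt.Println()
}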
Types
type JiebaTokenizer
type JiebaTokenizer struct {
// contains filtered or unexported fields
}
JiebaTokenizer is the bleve tokenizer for jiebago.
func (*JiebaTokenizer) Tokenize
func (jt *JiebaTokenizer) Tokenize(input []byte) analysis.TokenStream
Tokenize cuts the input into a bleve token stream.
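Because Start and End are byte offsets into the UTF-8 input (see the note under the first example), they can be used to slice the original input directly. A minimal sketch (not from the package docs), reusing the dictionary path from the examples above:

package main

import (
    "fmt"
    "log"

    "github.com/Soontao/jiebago/tokenizers"
)

func main() {
    input := []byte("永和服装饰品有限公司")
    tokenizer, err := tokenizers.NewJiebaTokenizer("../dict.txt", true, false)
    if err != nil {
        log.Fatal(err)
    }
    // Each token's Start/End are byte offsets, so slicing the original
    // input recovers exactly the bytes of the term.
    for _, token := range tokenizer.Tokenize(input) {
        fmt.Printf("%s == %s\n", token.Term, input[token.Start:token.End])
    }
}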