use GOPROXY

Cadey Ratio 2018-10-19 06:58:02 -07:00
parent f363c7e7eb
commit d2ff440799
114 changed files with 5 additions and 20900 deletions


@ -1,7 +1,8 @@
-FROM xena/go:1.11 AS build
-COPY . /root/go/src/github.com/Xe/site
-WORKDIR /root/go/src/github.com/Xe/site
-RUN GO111MODULE=on CGO_ENABLED=0 GOBIN=/root go install -v -mod=vendor ./cmd/site
+FROM xena/go:1.11.1 AS build
+ENV GOPROXY https://cache.greedo.xeserv.us
+COPY . /site
+WORKDIR /site
+RUN CGO_ENABLED=0 GOBIN=/root go install -v ./cmd/site
 
 FROM xena/alpine
 EXPOSE 5000


@ -1 +0,0 @@
*.so


@ -1,19 +0,0 @@
Copyright (c) 2017 Christine Dodrill <me@christine.website>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@ -1,51 +0,0 @@
gopreload
=========
An emulation of the Linux libc `LD_PRELOAD` mechanism, but for Go plugins, used to
add instrumentation and debugging utilities to a program at startup.
## Pluginizer
`pluginizer` is a bit of glue that makes it easier to turn underscore imports
into plugins:
```console
$ go get github.com/Xe/gopreload/cmd/pluginizer
$ pluginizer -help
Usage of pluginizer:
-dest string
destination package to generate
-pkg string
package to underscore import
$ pluginizer -pkg github.com/lib/pq -dest github.com/Xe/gopreload/database/postgres
To build this plugin:
$ go build -buildmode plugin -o /path/to/output.so github.com/Xe/gopreload/database/postgres
```
### Database drivers
I have included autogenerated plugin boilerplate for the sqlite, postgres,
and mysql database drivers; a minimal sketch of such a package is shown below.
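For illustration, a generated package along these lines might look like the
following minimal sketch (the exact code `pluginizer` emits may differ; the
build command is the one shown above):

```go
// Plugin boilerplate for the lib/pq Postgres driver. Build with:
//
//	go build -buildmode plugin -o /path/to/output.so github.com/Xe/gopreload/database/postgres
package main

import (
	// The underscore import is the whole point: opening the plugin runs the
	// driver's init side effects, registering it with database/sql.
	_ "github.com/lib/pq"
)
```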
## Manhole
[`manhole`][manhole] is an example of debugging and introspection tooling that has
been useful when debugging past issues with long-running server processes.
## Security Implications
This package assumes that programs run using it are never started with environment
variables that are set by unauthenticated users. Any errors in loading the plugins
will be logged using the standard library logger `log` and ignored.
This has about the same security implications as [`LD_PRELOAD`][ld-preload] does in most
Linux distributions, but the risk is small compared to the benefits: arbitrary
background services can all be dug into with the same tooling, and metric
submission can be separated completely from backend metric creation. Common
logging setup can also be _always_ loaded, so the default logger starts out
with the correct settings.
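As a sketch of that always-loaded logging case, a preload plugin could be as
small as the following (the flag choices are purely illustrative):

```go
// Built with -buildmode=plugin and listed in GO_PRELOAD, this package's
// init function runs as soon as gopreload opens the plugin.
package main

import (
	"log"
	"os"
)

func init() {
	// Put the process-wide default logger into the desired configuration
	// without touching the host program's source.
	log.SetOutput(os.Stderr)
	log.SetFlags(log.LstdFlags | log.LUTC | log.Lshortfile)
}
```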
---
[manhole]: https://github.com/Xe/gopreload/tree/master/manhole
[ld-preload]: https://rafalcieslak.wordpress.com/2013/04/02/dynamic-linker-tricks-using-ld_preload-to-cheat-inject-features-and-investigate-programs/


@ -1,7 +0,0 @@
/*
Package gopreload is a bit of a hack to emulate the behavior of LD_PRELOAD [ld-preload].
This allows you to have automatically starting instrumentation, etc.
[ld-preload]: http://man7.org/linux/man-pages/man8/ld.so.8.html (see LD_PRELOAD section)
*/
package gopreload


@ -1,26 +0,0 @@
//+build linux,go1.8

package gopreload

import (
	"log"
	"os"
	"plugin"
	"strings"
)

func init() {
	gpv := os.Getenv("GO_PRELOAD")
	if gpv == "" {
		return
	}

	for _, elem := range strings.Split(gpv, ",") {
		log.Printf("gopreload: trying to open: %s", elem)
		_, err := plugin.Open(elem)
		if err != nil {
			log.Printf("%v from GO_PRELOAD cannot be loaded: %v", elem, err)
			continue
		}
	}
}


@ -1,17 +0,0 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/
language: go
go:
- 1.8.1
before_install:
- go get -t -v ./...
script:
- go test -race -coverprofile=coverage.txt -covermode=atomic
after_success:
- bash <(curl -s https://codecov.io/bash)

vendor/github.com/Xe/jsonfeed/LICENSE (generated, vendored, 363 lines)

@ -1,363 +0,0 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.


@ -1,8 +0,0 @@
# JSONFeed - Go Package to parse JSON Feed streams
[![Build Status](https://travis-ci.org/st3fan/jsonfeed.svg?branch=master)](https://travis-ci.org/st3fan/jsonfeed) [![Go Report Card](https://goreportcard.com/badge/github.com/st3fan/jsonfeed)](https://goreportcard.com/report/github.com/st3fan/jsonfeed) [![codecov](https://codecov.io/gh/st3fan/jsonfeed/branch/master/graph/badge.svg)](https://codecov.io/gh/st3fan/jsonfeed)
*Stefan Arentz, May 2017*
Work in progress. Minimal package to parse JSON Feed streams. Please file feature requests.
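A minimal usage sketch, assuming a feed served over HTTP (the URL is
hypothetical, and the import path follows the vendored location used in this
repository):

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/Xe/jsonfeed"
)

func main() {
	resp, err := http.Get("https://example.com/feed.json") // hypothetical feed URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Parse reads the stream and returns the decoded Feed.
	feed, err := jsonfeed.Parse(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %d items\n", feed.Title, len(feed.Items))
}
```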


@ -1,73 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/
package jsonfeed
import (
"encoding/json"
"io"
"time"
)
const CurrentVersion = "https://jsonfeed.org/version/1"
type Item struct {
ID string `json:"id"`
URL string `json:"url"`
ExternalURL string `json:"external_url"`
Title string `json:"title"`
ContentHTML string `json:"content_html"`
ContentText string `json:"content_text"`
Summary string `json:"summary"`
Image string `json:"image"`
BannerImage string `json:"banner_image"`
DatePublished time.Time `json:"date_published"`
DateModified time.Time `json:"date_modified"`
Author Author `json:"author"`
Tags []string `json:"tags"`
}
type Author struct {
Name string `json:"name"`
URL string `json:"url"`
Avatar string `json:"avatar"`
}
type Hub struct {
Type string `json:"type"`
URL string `json:"url"`
}
type Attachment struct {
URL string `json:"url"`
MIMEType string `json:"mime_type"`
Title string `json:"title"`
SizeInBytes int64 `json:"size_in_bytes"`
DurationInSeconds int64 `json:"duration_in_seconds"`
}
type Feed struct {
Version string `json:"version"`
Title string `json:"title"`
HomePageURL string `json:"home_page_url"`
FeedURL string `json:"feed_url"`
Description string `json:"description"`
UserComment string `json:"user_comment"`
NextURL string `json:"next_url"`
Icon string `json:"icon"`
Favicon string `json:"favicon"`
Author Author `json:"author"`
Expired bool `json:"expired"`
Hubs []Hub `json:"hubs"`
Items []Item `json:"items"`
}
func Parse(r io.Reader) (Feed, error) {
var feed Feed
decoder := json.NewDecoder(r)
if err := decoder.Decode(&feed); err != nil {
return Feed{}, err
}
return feed, nil
}

vendor/github.com/Xe/ln/LICENSE (generated, vendored, 25 lines)

@ -1,25 +0,0 @@
Copyright (c) 2015, Andrew Gwozdziewycz
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/Xe/ln/README.md (generated, vendored, 29 lines)

@ -1,29 +0,0 @@
# ln: The Natural Logger for Go
`ln` provides a simple interface to logging and metrics, and for simple
use cases it obviates the need for purpose-built metrics packages such
as `go-metrics`.
The design of `ln` centers on key-value pairs, which can be interpreted
on the fly by "Filters" to do things such as aggregate metrics and
report them to, say, Librato or statsd.
"Filters" are like WSGI or Rack middleware. They are run "top down" and
can abort an emitted log's output at any time, or let it continue
through the chain. The interface is slightly different, though: rather
than encapsulating the chain with partial function application, each
filter defines an `Apply` function that takes the log event as an
argument and does the filter's work, but only if the filter "applies"
to that event.
If `Apply` returns `false`, iteration through the rest of the filters
is aborted and the log is dropped from further processing.
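For concreteness, here is a minimal usage sketch built only from the API in
the vendored sources below (`FilterFunc`, `F`, `WithF`, `Log`); the counting
filter and the field values are illustrative:

```go
package main

import (
	"context"

	"github.com/Xe/ln"
)

func main() {
	// A filter that counts events carrying an "action" key, then lets them
	// continue down the chain to the default writer filter.
	var actions int
	counter := ln.FilterFunc(func(ctx context.Context, e ln.Event) bool {
		if _, ok := e.Data["action"]; ok {
			actions++
		}
		return true
	})

	// Filters run "top down"; put the counter first so it sees every event.
	ln.DefaultLogger.Filters = append([]ln.Filter{counter}, ln.DefaultLogger.Filters...)

	ctx := ln.WithF(context.Background(), ln.F{"service": "example"})
	ln.Log(ctx, ln.F{"action": "writing frozberry sales reports to database"})
}
```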
## Current Status: Initial Development / Concept
## Copyright
(c) 2015, Andrew Gwozdziewycz, BSD Licensed. See LICENSE for more
info.

vendor/github.com/Xe/ln/action.go (generated, vendored, 11 lines)

@ -1,11 +0,0 @@
package ln
// Action is a convenience helper for logging the "action" being performed as
// part of a log line.
//
// It is a convenience wrapper for the following:
//
// ln.Log(ctx, fer, f, ln.Action("writing frozberry sales reports to database"))
func Action(act string) Fer {
return F{"action": act}
}

vendor/github.com/Xe/ln/context.go (generated, vendored, 38 lines)

@ -1,38 +0,0 @@
package ln
import (
"context"
)
type ctxKey int
const (
fKey = iota
)
// WithF stores or appends a given F instance into a context.
func WithF(ctx context.Context, f F) context.Context {
pf, ok := FFromContext(ctx)
if !ok {
return context.WithValue(ctx, fKey, f)
}
pf.Extend(f)
return context.WithValue(ctx, fKey, pf)
}
// FFromContext fetches the `F` out of the context if it exists.
func FFromContext(ctx context.Context) (F, bool) {
fvp := ctx.Value(fKey)
if fvp == nil {
return nil, false
}
f, ok := fvp.(F)
if !ok {
return nil, false
}
return f, true
}

vendor/github.com/Xe/ln/doc.go (generated, vendored, 25 lines)

@ -1,25 +0,0 @@
/*
Package ln is the Natural Logger for Go
`ln` provides a simple interface to logging and metrics, and for simple
use cases it obviates the need for purpose-built metrics packages such
as `go-metrics`.
The design of `ln` centers on key-value pairs, which can be interpreted
on the fly by "Filters" to do things such as aggregate metrics and
report them to, say, Librato or statsd.
"Filters" are like WSGI or Rack middleware. They are run "top down" and
can abort an emitted log's output at any time, or let it continue
through the chain. The interface is slightly different, though: rather
than encapsulating the chain with partial function application, each
filter defines an `Apply` function that takes the log event as an
argument and does the filter's work, but only if the filter "applies"
to that event.
If `Apply` returns `false`, iteration through the rest of the filters
is aborted and the log is dropped from further processing.
*/
package ln

vendor/github.com/Xe/ln/filter.go (generated, vendored, 67 lines)

@ -1,67 +0,0 @@
package ln
import (
"context"
"io"
"sync"
)
// Filter interface for defining chain filters
type Filter interface {
Apply(ctx context.Context, e Event) bool
Run()
Close()
}
// FilterFunc allows simple functions to implement the Filter interface
type FilterFunc func(ctx context.Context, e Event) bool
// Apply implements the Filter interface
func (ff FilterFunc) Apply(ctx context.Context, e Event) bool {
return ff(ctx, e)
}
// Run implements the Filter interface
func (ff FilterFunc) Run() {}
// Close implements the Filter interface
func (ff FilterFunc) Close() {}
// WriterFilter implements a filter, which arbitrarily writes to an io.Writer
type WriterFilter struct {
sync.Mutex
Out io.Writer
Formatter Formatter
}
// NewWriterFilter creates a filter to add to the chain
func NewWriterFilter(out io.Writer, format Formatter) *WriterFilter {
if format == nil {
format = DefaultFormatter
}
return &WriterFilter{
Out: out,
Formatter: format,
}
}
// Apply implements the Filter interface
func (w *WriterFilter) Apply(ctx context.Context, e Event) bool {
output, err := w.Formatter.Format(ctx, e)
if err == nil {
w.Lock()
w.Out.Write(output)
w.Unlock()
}
return true
}
// Run implements the Filter interface
func (w *WriterFilter) Run() {}
// Close implements the Filter interface
func (w *WriterFilter) Close() {}
// NilFilter is safe to return as a Filter, but does nothing
var NilFilter = FilterFunc(func(_ context.Context, e Event) bool { return true })

vendor/github.com/Xe/ln/formatter.go (generated, vendored, 111 lines)

@ -1,111 +0,0 @@
package ln
import (
"bytes"
"context"
"fmt"
"time"
)
var (
// DefaultTimeFormat represents the way in which time will be formatted by default
DefaultTimeFormat = time.RFC3339
)
// Formatter defines the formatting of events
type Formatter interface {
Format(ctx context.Context, e Event) ([]byte, error)
}
// DefaultFormatter is the default way in which to format events
var DefaultFormatter Formatter
func init() {
DefaultFormatter = NewTextFormatter()
}
// TextFormatter formats events as key value pairs.
// Any remaining text not wrapped in an instance of `F` will be
// placed at the end.
type TextFormatter struct {
TimeFormat string
}
// NewTextFormatter returns a Formatter that outputs as text.
func NewTextFormatter() Formatter {
return &TextFormatter{TimeFormat: DefaultTimeFormat}
}
// Format implements the Formatter interface
func (t *TextFormatter) Format(_ context.Context, e Event) ([]byte, error) {
var writer bytes.Buffer
writer.WriteString("time=\"")
writer.WriteString(e.Time.Format(t.TimeFormat))
writer.WriteString("\"")
keys := make([]string, len(e.Data))
i := 0
for k := range e.Data {
keys[i] = k
i++
}
for _, k := range keys {
v := e.Data[k]
writer.WriteByte(' ')
if shouldQuote(k) {
writer.WriteString(fmt.Sprintf("%q", k))
} else {
writer.WriteString(k)
}
writer.WriteByte('=')
switch v.(type) {
case string:
vs, _ := v.(string)
if shouldQuote(vs) {
fmt.Fprintf(&writer, "%q", vs)
} else {
writer.WriteString(vs)
}
case error:
tmperr, _ := v.(error)
es := tmperr.Error()
if shouldQuote(es) {
fmt.Fprintf(&writer, "%q", es)
} else {
writer.WriteString(es)
}
case time.Time:
tmptime, _ := v.(time.Time)
writer.WriteString(tmptime.Format(time.RFC3339))
default:
fmt.Fprint(&writer, v)
}
}
if len(e.Message) > 0 {
fmt.Fprintf(&writer, " _msg=%q", e.Message)
}
writer.WriteByte('\n')
return writer.Bytes(), nil
}
func shouldQuote(s string) bool {
for _, b := range s {
if !((b >= 'A' && b <= 'Z') ||
(b >= 'a' && b <= 'z') ||
(b >= '0' && b <= '9') ||
(b == '-' || b == '.' || b == '#' ||
b == '/' || b == '_')) {
return true
}
}
return false
}

vendor/github.com/Xe/ln/logger.go (generated, vendored, 180 lines)

@ -1,180 +0,0 @@
package ln
import (
"context"
"os"
"time"
"github.com/pkg/errors"
)
// Logger holds the current priority and list of filters
type Logger struct {
Filters []Filter
}
// DefaultLogger is the default implementation of Logger
var DefaultLogger *Logger
func init() {
var defaultFilters []Filter
// Default to STDOUT for logging, but allow LN_OUT to change it.
out := os.Stdout
if os.Getenv("LN_OUT") == "<stderr>" {
out = os.Stderr
}
defaultFilters = append(defaultFilters, NewWriterFilter(out, nil))
DefaultLogger = &Logger{
Filters: defaultFilters,
}
}
// F is a key-value mapping for structured data.
type F map[string]interface{}
// Extend concatenates one F with one or many Fer instances.
func (f F) Extend(other ...Fer) {
for _, ff := range other {
for k, v := range ff.F() {
f[k] = v
}
}
}
// F makes F an Fer
func (f F) F() F {
return f
}
// Fer allows any type to add fields to the structured logging key->value pairs.
type Fer interface {
F() F
}
// Event represents an event
type Event struct {
Time time.Time
Data F
Message string
}
// Log is the generic logging method.
func (l *Logger) Log(ctx context.Context, xs ...Fer) {
event := Event{Time: time.Now()}
addF := func(bf F) {
if event.Data == nil {
event.Data = bf
} else {
for k, v := range bf {
event.Data[k] = v
}
}
}
for _, f := range xs {
addF(f.F())
}
ctxf, ok := FFromContext(ctx)
if ok {
addF(ctxf)
}
if os.Getenv("LN_DEBUG_ALL_EVENTS") == "1" {
frame := callersFrame()
if event.Data == nil {
event.Data = make(F)
}
event.Data["_lineno"] = frame.lineno
event.Data["_function"] = frame.function
event.Data["_filename"] = frame.filename
}
l.filter(ctx, event)
}
func (l *Logger) filter(ctx context.Context, e Event) {
for _, f := range l.Filters {
if !f.Apply(ctx, e) {
return
}
}
}
// Error logs an error and information about the context of said error.
func (l *Logger) Error(ctx context.Context, err error, xs ...Fer) {
data := F{}
frame := callersFrame()
data["_lineno"] = frame.lineno
data["_function"] = frame.function
data["_filename"] = frame.filename
data["err"] = err
cause := errors.Cause(err)
if cause != nil {
data["cause"] = cause.Error()
}
xs = append(xs, data)
l.Log(ctx, xs...)
}
// Fatal logs this set of values, then exits with status code 1.
func (l *Logger) Fatal(ctx context.Context, xs ...Fer) {
xs = append(xs, F{"fatal": true})
l.Log(ctx, xs...)
os.Exit(1)
}
// FatalErr combines Fatal and Error.
func (l *Logger) FatalErr(ctx context.Context, err error, xs ...Fer) {
xs = append(xs, F{"fatal": true})
data := F{}
frame := callersFrame()
data["_lineno"] = frame.lineno
data["_function"] = frame.function
data["_filename"] = frame.filename
data["err"] = err
cause := errors.Cause(err)
if cause != nil {
data["cause"] = cause.Error()
}
xs = append(xs, data)
l.Log(ctx, xs...)
os.Exit(1)
}
// Default Implementation
// Log is the generic logging method.
func Log(ctx context.Context, xs ...Fer) {
DefaultLogger.Log(ctx, xs...)
}
// Error logs an error and information about the context of said error.
func Error(ctx context.Context, err error, xs ...Fer) {
DefaultLogger.Error(ctx, err, xs...)
}
// Fatal logs this set of values, then exits with status code 1.
func Fatal(ctx context.Context, xs ...Fer) {
DefaultLogger.Fatal(ctx, xs...)
}
// FatalErr combines Fatal and Error.
func FatalErr(ctx context.Context, err error, xs ...Fer) {
DefaultLogger.FatalErr(ctx, err, xs...)
}

vendor/github.com/Xe/ln/stack.go (generated, vendored, 44 lines)

@ -1,44 +0,0 @@
package ln
import (
"os"
"runtime"
"strings"
)
type frame struct {
filename string
function string
lineno int
}
// skips 2 frames, since Caller returns the current frame, and we need
// the caller's caller.
func callersFrame() frame {
var out frame
pc, file, line, ok := runtime.Caller(3)
if !ok {
return out
}
srcLoc := strings.LastIndex(file, "/src/")
if srcLoc >= 0 {
file = file[srcLoc+5:]
}
out.filename = file
out.function = functionName(pc)
out.lineno = line
return out
}
func functionName(pc uintptr) string {
fn := runtime.FuncForPC(pc)
if fn == nil {
return "???"
}
name := fn.Name()
beg := strings.LastIndex(name, string(os.PathSeparator))
return name[beg+1:]
// end := strings.LastIndex(name, string(os.PathSeparator))
// return name[end+1 : len(name)]
}


@ -1,27 +0,0 @@
Copyright (c) 2016 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,237 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package agent provides hooks programs can register to retrieve
// diagnostics data by using gops.
package agent
import (
"fmt"
"io"
"io/ioutil"
"net"
"os"
gosignal "os/signal"
"runtime"
"runtime/pprof"
"runtime/trace"
"strconv"
"sync"
"time"
"bufio"
"github.com/google/gops/internal"
"github.com/google/gops/signal"
"github.com/kardianos/osext"
)
const defaultAddr = "127.0.0.1:0"
var (
mu sync.Mutex
portfile string
listener net.Listener
units = []string{" bytes", "KB", "MB", "GB", "TB", "PB"}
)
// Options allows configuring the started agent.
type Options struct {
// Addr is the host:port the agent will be listening at.
// Optional.
Addr string
// NoShutdownCleanup tells the agent not to automatically cleanup
// resources if the running process receives an interrupt.
// Optional.
NoShutdownCleanup bool
}
// Listen starts the gops agent on a host process. Once agent started, users
// can use the advanced gops features. The agent will listen to Interrupt
// signals and exit the process, if you need to perform further work on the
// Interrupt signal use the options parameter to configure the agent
// accordingly.
//
// Note: The agent exposes an endpoint via a TCP connection that can be used by
// any program on the system. Review your security requirements before starting
// the agent.
func Listen(opts *Options) error {
mu.Lock()
defer mu.Unlock()
if opts == nil {
opts = &Options{}
}
if portfile != "" {
return fmt.Errorf("gops: agent already listening at: %v", listener.Addr())
}
gopsdir, err := internal.ConfigDir()
if err != nil {
return err
}
err = os.MkdirAll(gopsdir, os.ModePerm)
if err != nil {
return err
}
if !opts.NoShutdownCleanup {
gracefulShutdown()
}
addr := opts.Addr
if addr == "" {
addr = defaultAddr
}
ln, err := net.Listen("tcp", addr)
if err != nil {
return err
}
listener = ln
port := listener.Addr().(*net.TCPAddr).Port
portfile = fmt.Sprintf("%s/%d", gopsdir, os.Getpid())
err = ioutil.WriteFile(portfile, []byte(strconv.Itoa(port)), os.ModePerm)
if err != nil {
return err
}
go listen()
return nil
}
func listen() {
buf := make([]byte, 1)
for {
fd, err := listener.Accept()
if err != nil {
fmt.Fprintf(os.Stderr, "gops: %v", err)
if netErr, ok := err.(net.Error); ok && !netErr.Temporary() {
break
}
continue
}
if _, err := fd.Read(buf); err != nil {
fmt.Fprintf(os.Stderr, "gops: %v", err)
continue
}
if err := handle(fd, buf); err != nil {
fmt.Fprintf(os.Stderr, "gops: %v", err)
continue
}
fd.Close()
}
}
func gracefulShutdown() {
c := make(chan os.Signal, 1)
gosignal.Notify(c, os.Interrupt)
go func() {
// cleanup the socket on shutdown.
<-c
Close()
os.Exit(1)
}()
}
// Close closes the agent, removing temporary files and closing the TCP listener.
// If no agent is listening, Close does nothing.
func Close() {
mu.Lock()
defer mu.Unlock()
if portfile != "" {
os.Remove(portfile)
portfile = ""
}
if listener != nil {
listener.Close()
}
}
func formatBytes(val uint64) string {
var i int
var target uint64
for i = range units {
target = 1 << uint(10*(i+1))
if val < target {
break
}
}
if i > 0 {
return fmt.Sprintf("%0.2f%s (%d bytes)", float64(val)/(float64(target)/1024), units[i], val)
}
return fmt.Sprintf("%d bytes", val)
}
func handle(conn io.Writer, msg []byte) error {
switch msg[0] {
case signal.StackTrace:
return pprof.Lookup("goroutine").WriteTo(conn, 2)
case signal.GC:
runtime.GC()
_, err := conn.Write([]byte("ok"))
return err
case signal.MemStats:
var s runtime.MemStats
runtime.ReadMemStats(&s)
fmt.Fprintf(conn, "alloc: %v\n", formatBytes(s.Alloc))
fmt.Fprintf(conn, "total-alloc: %v\n", formatBytes(s.TotalAlloc))
fmt.Fprintf(conn, "sys: %v\n", formatBytes(s.Sys))
fmt.Fprintf(conn, "lookups: %v\n", s.Lookups)
fmt.Fprintf(conn, "mallocs: %v\n", s.Mallocs)
fmt.Fprintf(conn, "frees: %v\n", s.Frees)
fmt.Fprintf(conn, "heap-alloc: %v\n", formatBytes(s.HeapAlloc))
fmt.Fprintf(conn, "heap-sys: %v\n", formatBytes(s.HeapSys))
fmt.Fprintf(conn, "heap-idle: %v\n", formatBytes(s.HeapIdle))
fmt.Fprintf(conn, "heap-in-use: %v\n", formatBytes(s.HeapInuse))
fmt.Fprintf(conn, "heap-released: %v\n", formatBytes(s.HeapReleased))
fmt.Fprintf(conn, "heap-objects: %v\n", s.HeapObjects)
fmt.Fprintf(conn, "stack-in-use: %v\n", formatBytes(s.StackInuse))
fmt.Fprintf(conn, "stack-sys: %v\n", formatBytes(s.StackSys))
fmt.Fprintf(conn, "next-gc: when heap-alloc >= %v\n", formatBytes(s.NextGC))
lastGC := "-"
if s.LastGC != 0 {
lastGC = fmt.Sprint(time.Unix(0, int64(s.LastGC)))
}
fmt.Fprintf(conn, "last-gc: %v\n", lastGC)
fmt.Fprintf(conn, "gc-pause: %v\n", time.Duration(s.PauseTotalNs))
fmt.Fprintf(conn, "num-gc: %v\n", s.NumGC)
fmt.Fprintf(conn, "enable-gc: %v\n", s.EnableGC)
fmt.Fprintf(conn, "debug-gc: %v\n", s.DebugGC)
case signal.Version:
fmt.Fprintf(conn, "%v\n", runtime.Version())
case signal.HeapProfile:
pprof.WriteHeapProfile(conn)
case signal.CPUProfile:
if err := pprof.StartCPUProfile(conn); err != nil {
return err
}
time.Sleep(30 * time.Second)
pprof.StopCPUProfile()
case signal.Stats:
fmt.Fprintf(conn, "goroutines: %v\n", runtime.NumGoroutine())
fmt.Fprintf(conn, "OS threads: %v\n", pprof.Lookup("threadcreate").Count())
fmt.Fprintf(conn, "GOMAXPROCS: %v\n", runtime.GOMAXPROCS(0))
fmt.Fprintf(conn, "num CPU: %v\n", runtime.NumCPU())
case signal.BinaryDump:
path, err := osext.Executable()
if err != nil {
return err
}
f, err := os.Open(path)
if err != nil {
return err
}
defer f.Close()
_, err = bufio.NewReader(f).WriteTo(conn)
return err
case signal.Trace:
trace.Start(conn)
time.Sleep(5 * time.Second)
trace.Stop()
}
return nil
}
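A minimal usage sketch against this vendored copy of the API (`Listen` takes
`*Options` here; the address and the placeholder HTTP server are
illustrative):

```go
package main

import (
	"log"
	"net/http"

	"github.com/google/gops/agent"
)

func main() {
	// Start the gops agent on an ephemeral localhost port so the gops CLI
	// can request stack traces, memory stats, profiles, and so on.
	if err := agent.Listen(&agent.Options{Addr: "127.0.0.1:0"}); err != nil {
		log.Fatal(err)
	}
	// agent.Close() removes the port file and closes the listener if the
	// process ever wants to shut the agent down explicitly.

	// Stand-in for the real work of a long-running server process.
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", http.NotFoundHandler()))
}
```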


@ -1,52 +0,0 @@
package internal
import (
"errors"
"fmt"
"io/ioutil"
"os"
"os/user"
"path/filepath"
"runtime"
"strings"
)
func ConfigDir() (string, error) {
if runtime.GOOS == "windows" {
return filepath.Join(os.Getenv("APPDATA"), "gops"), nil
}
homeDir := guessUnixHomeDir()
if homeDir == "" {
return "", errors.New("unable to get current user home directory: os/user lookup failed; $HOME is empty")
}
return filepath.Join(homeDir, ".config", "gops"), nil
}
func guessUnixHomeDir() string {
usr, err := user.Current()
if err == nil {
return usr.HomeDir
}
return os.Getenv("HOME")
}
func PIDFile(pid int) (string, error) {
gopsdir, err := ConfigDir()
if err != nil {
return "", err
}
return fmt.Sprintf("%s/%d", gopsdir, pid), nil
}
func GetPort(pid int) (string, error) {
portfile, err := PIDFile(pid)
if err != nil {
return "", err
}
b, err := ioutil.ReadFile(portfile)
if err != nil {
return "", err
}
port := strings.TrimSpace(string(b))
return port, nil
}


@ -1,35 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package signal contains signals used to communicate to the gops agents.
package signal
const (
// StackTrace represents a command to print stack trace.
StackTrace = byte(0x1)
// GC runs the garbage collector.
GC = byte(0x2)
// MemStats reports memory stats.
MemStats = byte(0x3)
// Version prints the Go version.
Version = byte(0x4)
// HeapProfile starts `go tool pprof` with the current memory profile.
HeapProfile = byte(0x5)
// CPUProfile starts `go tool pprof` with the current CPU profile
CPUProfile = byte(0x6)
// Stats returns Go runtime statistics such as number of goroutines, GOMAXPROCS, and NumCPU.
Stats = byte(0x7)
// Trace starts the Go execution tracer, waits 5 seconds and launches the trace tool.
Trace = byte(0x8)
// BinaryDump returns running binary file.
BinaryDump = byte(0x9)
)


@ -1,15 +0,0 @@
language: go
sudo: false
matrix:
include:
- go: 1.7
- go: 1.8
- go: 1.x
- go: master
allow_failures:
- go: master
script:
- go get -t -v ./...
- diff -u <(echo -n) <(gofmt -d -s .)
- go tool vet .
- go test -v -race ./...


@ -1,22 +0,0 @@
Copyright (c) 2013 The Gorilla Feeds Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,184 +0,0 @@
## gorilla/feeds
[![GoDoc](https://godoc.org/github.com/gorilla/feeds?status.svg)](https://godoc.org/github.com/gorilla/feeds)
feeds is a web feed generator library for generating RSS, Atom and JSON feeds from Go
applications.
### Goals
* Provide a simple interface to create both Atom & RSS 2.0 feeds
* Full support for [Atom][atom], [RSS 2.0][rss], and [JSON Feed Version 1][jsonfeed] spec elements
* Ability to modify particulars for each spec
[atom]: https://tools.ietf.org/html/rfc4287
[rss]: http://www.rssboard.org/rss-specification
[jsonfeed]: https://jsonfeed.org/version/1
### Usage
```go
package main
import (
"fmt"
"log"
"time"
"github.com/gorilla/feeds"
)
func main() {
now := time.Now()
feed := &feeds.Feed{
Title: "jmoiron.net blog",
Link: &feeds.Link{Href: "http://jmoiron.net/blog"},
Description: "discussion about tech, footie, photos",
Author: &feeds.Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
Created: now,
}
feed.Items = []*feeds.Item{
&feeds.Item{
Title: "Limiting Concurrency in Go",
Link: &feeds.Link{Href: "http://jmoiron.net/blog/limiting-concurrency-in-go/"},
Description: "A discussion on controlled parallelism in golang",
Author: &feeds.Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
Created: now,
},
&feeds.Item{
Title: "Logic-less Template Redux",
Link: &feeds.Link{Href: "http://jmoiron.net/blog/logicless-template-redux/"},
Description: "More thoughts on logicless templates",
Created: now,
},
&feeds.Item{
Title: "Idiomatic Code Reuse in Go",
Link: &feeds.Link{Href: "http://jmoiron.net/blog/idiomatic-code-reuse-in-go/"},
Description: "How to use interfaces <em>effectively</em>",
Created: now,
},
}
atom, err := feed.ToAtom()
if err != nil {
log.Fatal(err)
}
rss, err := feed.ToRss()
if err != nil {
log.Fatal(err)
}
json, err := feed.ToJSON()
if err != nil {
log.Fatal(err)
}
fmt.Println(atom, "\n", rss, "\n", json)
}
```
Outputs:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>jmoiron.net blog</title>
<link href="http://jmoiron.net/blog"></link>
<id>http://jmoiron.net/blog</id>
<updated>2013-01-16T03:26:01-05:00</updated>
<summary>discussion about tech, footie, photos</summary>
<entry>
<title>Limiting Concurrency in Go</title>
<link href="http://jmoiron.net/blog/limiting-concurrency-in-go/"></link>
<updated>2013-01-16T03:26:01-05:00</updated>
<id>tag:jmoiron.net,2013-01-16:/blog/limiting-concurrency-in-go/</id>
<summary type="html">A discussion on controlled parallelism in golang</summary>
<author>
<name>Jason Moiron</name>
<email>jmoiron@jmoiron.net</email>
</author>
</entry>
<entry>
<title>Logic-less Template Redux</title>
<link href="http://jmoiron.net/blog/logicless-template-redux/"></link>
<updated>2013-01-16T03:26:01-05:00</updated>
<id>tag:jmoiron.net,2013-01-16:/blog/logicless-template-redux/</id>
<summary type="html">More thoughts on logicless templates</summary>
<author></author>
</entry>
<entry>
<title>Idiomatic Code Reuse in Go</title>
<link href="http://jmoiron.net/blog/idiomatic-code-reuse-in-go/"></link>
<updated>2013-01-16T03:26:01-05:00</updated>
<id>tag:jmoiron.net,2013-01-16:/blog/idiomatic-code-reuse-in-go/</id>
<summary type="html">How to use interfaces &lt;em&gt;effectively&lt;/em&gt;</summary>
<author></author>
</entry>
</feed>
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>jmoiron.net blog</title>
<link>http://jmoiron.net/blog</link>
<description>discussion about tech, footie, photos</description>
<managingEditor>jmoiron@jmoiron.net (Jason Moiron)</managingEditor>
<pubDate>2013-01-16T03:22:24-05:00</pubDate>
<item>
<title>Limiting Concurrency in Go</title>
<link>http://jmoiron.net/blog/limiting-concurrency-in-go/</link>
<description>A discussion on controlled parallelism in golang</description>
<pubDate>2013-01-16T03:22:24-05:00</pubDate>
</item>
<item>
<title>Logic-less Template Redux</title>
<link>http://jmoiron.net/blog/logicless-template-redux/</link>
<description>More thoughts on logicless templates</description>
<pubDate>2013-01-16T03:22:24-05:00</pubDate>
</item>
<item>
<title>Idiomatic Code Reuse in Go</title>
<link>http://jmoiron.net/blog/idiomatic-code-reuse-in-go/</link>
<description>How to use interfaces &lt;em&gt;effectively&lt;/em&gt;</description>
<pubDate>2013-01-16T03:22:24-05:00</pubDate>
</item>
</channel>
</rss>
{
"version": "https://jsonfeed.org/version/1",
"title": "jmoiron.net blog",
"home_page_url": "http://jmoiron.net/blog",
"description": "discussion about tech, footie, photos",
"author": {
"name": "Jason Moiron"
},
"items": [
{
"id": "",
"url": "http://jmoiron.net/blog/limiting-concurrency-in-go/",
"title": "Limiting Concurrency in Go",
"summary": "A discussion on controlled parallelism in golang",
"date_published": "2013-01-16T03:22:24.530817846-05:00",
"author": {
"name": "Jason Moiron"
}
},
{
"id": "",
"url": "http://jmoiron.net/blog/logicless-template-redux/",
"title": "Logic-less Template Redux",
"summary": "More thoughts on logicless templates",
"date_published": "2013-01-16T03:22:24.530817846-05:00"
},
{
"id": "",
"url": "http://jmoiron.net/blog/idiomatic-code-reuse-in-go/",
"title": "Idiomatic Code Reuse in Go",
"summary": "How to use interfaces \u003cem\u003eeffectively\u003c/em\u003e",
"date_published": "2013-01-16T03:22:24.530817846-05:00"
}
]
}
```


@ -1,164 +0,0 @@
package feeds
import (
"encoding/xml"
"fmt"
"net/url"
"time"
)
// Generates Atom feed as XML
const ns = "http://www.w3.org/2005/Atom"
type AtomPerson struct {
Name string `xml:"name,omitempty"`
Uri string `xml:"uri,omitempty"`
Email string `xml:"email,omitempty"`
}
type AtomSummary struct {
XMLName xml.Name `xml:"summary"`
Content string `xml:",chardata"`
Type string `xml:"type,attr"`
}
type AtomContent struct {
XMLName xml.Name `xml:"content"`
Content string `xml:",chardata"`
Type string `xml:"type,attr"`
}
type AtomAuthor struct {
XMLName xml.Name `xml:"author"`
AtomPerson
}
type AtomContributor struct {
XMLName xml.Name `xml:"contributor"`
AtomPerson
}
type AtomEntry struct {
XMLName xml.Name `xml:"entry"`
Xmlns string `xml:"xmlns,attr,omitempty"`
Title string `xml:"title"` // required
Updated string `xml:"updated"` // required
Id string `xml:"id"` // required
Category string `xml:"category,omitempty"`
Content *AtomContent
Rights string `xml:"rights,omitempty"`
Source string `xml:"source,omitempty"`
Published string `xml:"published,omitempty"`
Contributor *AtomContributor
Links []AtomLink // required if no child 'content' elements
Summary *AtomSummary // required if content has src or content is base64
Author *AtomAuthor // required if feed lacks an author
}
// Multiple links with different rel can coexist
type AtomLink struct {
//Atom 1.0 <link rel="enclosure" type="audio/mpeg" title="MP3" href="http://www.example.org/myaudiofile.mp3" length="1234" />
XMLName xml.Name `xml:"link"`
Href string `xml:"href,attr"`
Rel string `xml:"rel,attr,omitempty"`
Type string `xml:"type,attr,omitempty"`
Length string `xml:"length,attr,omitempty"`
}
type AtomFeed struct {
XMLName xml.Name `xml:"feed"`
Xmlns string `xml:"xmlns,attr"`
Title string `xml:"title"` // required
Id string `xml:"id"` // required
Updated string `xml:"updated"` // required
Category string `xml:"category,omitempty"`
Icon string `xml:"icon,omitempty"`
Logo string `xml:"logo,omitempty"`
Rights string `xml:"rights,omitempty"` // copyright used
Subtitle string `xml:"subtitle,omitempty"`
Link *AtomLink
Author *AtomAuthor `xml:"author,omitempty"`
Contributor *AtomContributor
Entries []*AtomEntry
}
type Atom struct {
*Feed
}
func newAtomEntry(i *Item) *AtomEntry {
id := i.Id
// assume the description is html
c := &AtomContent{Content: i.Description, Type: "html"}
if len(id) == 0 {
// if there's no id set, try to create one, either from data or just a uuid
if len(i.Link.Href) > 0 && (!i.Created.IsZero() || !i.Updated.IsZero()) {
dateStr := anyTimeFormat("2006-01-02", i.Updated, i.Created)
host, path := i.Link.Href, "/invalid.html"
if url, err := url.Parse(i.Link.Href); err == nil {
host, path = url.Host, url.Path
}
id = fmt.Sprintf("tag:%s,%s:%s", host, dateStr, path)
} else {
id = "urn:uuid:" + NewUUID().String()
}
}
var name, email string
if i.Author != nil {
name, email = i.Author.Name, i.Author.Email
}
link_rel := i.Link.Rel
if link_rel == "" {
link_rel = "alternate"
}
x := &AtomEntry{
Title: i.Title,
Links: []AtomLink{{Href: i.Link.Href, Rel: link_rel, Type: i.Link.Type}},
Content: c,
Id: id,
Updated: anyTimeFormat(time.RFC3339, i.Updated, i.Created),
}
if i.Enclosure != nil && link_rel != "enclosure" {
x.Links = append(x.Links, AtomLink{Href: i.Enclosure.Url, Rel: "enclosure", Type: i.Enclosure.Type, Length: i.Enclosure.Length})
}
if len(name) > 0 || len(email) > 0 {
x.Author = &AtomAuthor{AtomPerson: AtomPerson{Name: name, Email: email}}
}
return x
}
// create a new AtomFeed with a generic Feed struct's data
func (a *Atom) AtomFeed() *AtomFeed {
updated := anyTimeFormat(time.RFC3339, a.Updated, a.Created)
feed := &AtomFeed{
Xmlns: ns,
Title: a.Title,
Link: &AtomLink{Href: a.Link.Href, Rel: a.Link.Rel},
Subtitle: a.Description,
Id: a.Link.Href,
Updated: updated,
Rights: a.Copyright,
}
if a.Author != nil {
feed.Author = &AtomAuthor{AtomPerson: AtomPerson{Name: a.Author.Name, Email: a.Author.Email}}
}
for _, e := range a.Items {
feed.Entries = append(feed.Entries, newAtomEntry(e))
}
return feed
}
// return an XML-Ready object for an Atom object
func (a *Atom) FeedXml() interface{} {
return a.AtomFeed()
}
// return an XML-ready object for an AtomFeed object
func (a *AtomFeed) FeedXml() interface{} {
return a
}


@ -1,73 +0,0 @@
/*
Syndication (feed) generator library for golang.
Installing
go get github.com/gorilla/feeds
Feeds provides a simple, generic Feed interface with a generic Item object as well as RSS, Atom and JSON Feed specific RssFeed, AtomFeed and JSONFeed objects which allow access to all of each spec's defined elements.
Examples
Create a Feed and some Items in that feed using the generic interfaces:
import (
"time"
. "github.com/gorilla/feeds
)
now = time.Now()
feed := &Feed{
Title: "jmoiron.net blog",
Link: &Link{Href: "http://jmoiron.net/blog"},
Description: "discussion about tech, footie, photos",
Author: &Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
Created: now,
Copyright: "This work is copyright © Benjamin Button",
}
feed.Items = []*Item{
&Item{
Title: "Limiting Concurrency in Go",
Link: &Link{Href: "http://jmoiron.net/blog/limiting-concurrency-in-go/"},
Description: "A discussion on controlled parallelism in golang",
Author: &Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
Created: now,
},
&Item{
Title: "Logic-less Template Redux",
Link: &Link{Href: "http://jmoiron.net/blog/logicless-template-redux/"},
Description: "More thoughts on logicless templates",
Created: now,
},
&Item{
Title: "Idiomatic Code Reuse in Go",
Link: &Link{Href: "http://jmoiron.net/blog/idiomatic-code-reuse-in-go/"},
Description: "How to use interfaces <em>effectively</em>",
Created: now,
},
}
From here, you can output Atom, RSS, or JSON Feed versions of this feed easily
atom, err := feed.ToAtom()
rss, err := feed.ToRss()
json, err := feed.ToJSON()
You can also get access to the underlying objects that feeds uses to export its XML
atomFeed := (&Atom{Feed: feed}).AtomFeed()
rssFeed := (&Rss{Feed: feed}).RssFeed()
jsonFeed := (&JSON{Feed: feed}).JSONFeed()
From here, you can modify or add each syndication's specific fields before outputting
atomFeed.Subtitle = "plays the blues"
atom, err := ToXML(atomFeed)
rssFeed.Generator = "gorilla/feeds v1.0 (github.com/gorilla/feeds)"
rss, err := ToXML(rssFeed)
jsonFeed.NextUrl = "https://www.example.com/feed.json?page=2"
json, err := jsonFeed.ToJSON()
*/
package feeds


@ -1,135 +0,0 @@
package feeds
import (
"encoding/json"
"encoding/xml"
"io"
"time"
)
type Link struct {
Href, Rel, Type, Length string
}
type Author struct {
Name, Email string
}
type Image struct {
Url, Title, Link string
Width, Height int
}
type Enclosure struct {
Url, Length, Type string
}
type Item struct {
Title string
Link *Link
Source *Link
Author *Author
Description string // used as description in rss, summary in atom
Id string // used as guid in rss, id in atom
Updated time.Time
Created time.Time
Enclosure *Enclosure
}
type Feed struct {
Title string
Link *Link
Description string
Author *Author
Updated time.Time
Created time.Time
Id string
Subtitle string
Items []*Item
Copyright string
Image *Image
}
// add a new Item to a Feed
func (f *Feed) Add(item *Item) {
f.Items = append(f.Items, item)
}
// returns the first non-zero time formatted as a string or ""
func anyTimeFormat(format string, times ...time.Time) string {
for _, t := range times {
if !t.IsZero() {
return t.Format(format)
}
}
return ""
}
// interface used by ToXML to get an object suitable for exporting XML.
type XmlFeed interface {
FeedXml() interface{}
}
// turn a feed object (either a Feed, AtomFeed, or RssFeed) into xml
// returns an error if xml marshaling fails
func ToXML(feed XmlFeed) (string, error) {
x := feed.FeedXml()
data, err := xml.MarshalIndent(x, "", " ")
if err != nil {
return "", err
}
// strip empty line from default xml header
s := xml.Header[:len(xml.Header)-1] + string(data)
return s, nil
}
// Write a feed object (either a Feed, AtomFeed, or RssFeed) as XML into
// the writer. Returns an error if XML marshaling fails.
func WriteXML(feed XmlFeed, w io.Writer) error {
x := feed.FeedXml()
// write default xml header, without the newline
if _, err := w.Write([]byte(xml.Header[:len(xml.Header)-1])); err != nil {
return err
}
e := xml.NewEncoder(w)
e.Indent("", " ")
return e.Encode(x)
}
// creates an Atom representation of this feed
func (f *Feed) ToAtom() (string, error) {
a := &Atom{f}
return ToXML(a)
}
// Writes an Atom representation of this feed to the writer.
func (f *Feed) WriteAtom(w io.Writer) error {
return WriteXML(&Atom{f}, w)
}
// creates an Rss representation of this feed
func (f *Feed) ToRss() (string, error) {
r := &Rss{f}
return ToXML(r)
}
// Writes an RSS representation of this feed to the writer.
func (f *Feed) WriteRss(w io.Writer) error {
return WriteXML(&Rss{f}, w)
}
// ToJSON creates a JSON Feed representation of this feed
func (f *Feed) ToJSON() (string, error) {
j := &JSON{f}
return j.ToJSON()
}
// WriteJSON writes a JSON representation of this feed to the writer.
func (f *Feed) WriteJSON(w io.Writer) error {
j := &JSON{f}
feed := j.JSONFeed()
e := json.NewEncoder(w)
e.SetIndent("", " ")
return e.Encode(feed)
}


@ -1,181 +0,0 @@
package feeds
import (
"encoding/json"
"strings"
"time"
)
const jsonFeedVersion = "https://jsonfeed.org/version/1"
// JSONAuthor represents the author of the feed or of an individual item
// in the feed
type JSONAuthor struct {
Name string `json:"name,omitempty"`
Url string `json:"url,omitempty"`
Avatar string `json:"avatar,omitempty"`
}
// JSONAttachment represents a related resource. Podcasts, for instance, would
// include an attachment that's an audio or video file.
type JSONAttachment struct {
Url string `json:"url,omitempty"`
MIMEType string `json:"mime_type,omitempty"`
Title string `json:"title,omitempty"`
Size int32 `json:"size,omitempty"`
Duration time.Duration `json:"duration_in_seconds,omitempty"`
}
// MarshalJSON implements the json.Marshaler interface.
// The Duration field is marshaled in seconds, all other fields are marshaled
// based upon the definitions in struct tags.
func (a *JSONAttachment) MarshalJSON() ([]byte, error) {
type EmbeddedJSONAttachment JSONAttachment
return json.Marshal(&struct {
Duration float64 `json:"duration_in_seconds,omitempty"`
*EmbeddedJSONAttachment
}{
EmbeddedJSONAttachment: (*EmbeddedJSONAttachment)(a),
Duration: a.Duration.Seconds(),
})
}
// UnmarshalJSON implements the json.Unmarshaler interface.
// The Duration field is expected to be in seconds, all other field types
// match the struct definition.
func (a *JSONAttachment) UnmarshalJSON(data []byte) error {
type EmbeddedJSONAttachment JSONAttachment
var raw struct {
Duration float64 `json:"duration_in_seconds,omitempty"`
*EmbeddedJSONAttachment
}
raw.EmbeddedJSONAttachment = (*EmbeddedJSONAttachment)(a)
err := json.Unmarshal(data, &raw)
if err != nil {
return err
}
if raw.Duration > 0 {
nsec := int64(raw.Duration * float64(time.Second))
raw.EmbeddedJSONAttachment.Duration = time.Duration(nsec)
}
return nil
}
// JSONItem represents a single entry/post for the feed.
type JSONItem struct {
Id string `json:"id"`
Url string `json:"url,omitempty"`
ExternalUrl string `json:"external_url,omitempty"`
Title string `json:"title,omitempty"`
ContentHTML string `json:"content_html,omitempty"`
ContentText string `json:"content_text,omitempty"`
Summary string `json:"summary,omitempty"`
Image string `json:"image,omitempty"`
BannerImage string `json:"banner_image,omitempty"`
PublishedDate *time.Time `json:"date_published,omitempty"`
ModifiedDate *time.Time `json:"date_modified,omitempty"`
Author *JSONAuthor `json:"author,omitempty"`
Tags []string `json:"tags,omitempty"`
Attachments []JSONAttachment `json:"attachments,omitempty"`
}
// JSONHub describes an endpoint that can be used to subscribe to real-time
// notifications from the publisher of this feed.
type JSONHub struct {
Type string `json:"type"`
Url string `json:"url"`
}
// JSONFeed represents a syndication feed in the JSON Feed Version 1 format.
// Matching the specification found here: https://jsonfeed.org/version/1.
type JSONFeed struct {
Version string `json:"version"`
Title string `json:"title"`
HomePageUrl string `json:"home_page_url,omitempty"`
FeedUrl string `json:"feed_url,omitempty"`
Description string `json:"description,omitempty"`
UserComment string `json:"user_comment,omitempty"`
NextUrl string `json:"next_url,omitempty"`
Icon string `json:"icon,omitempty"`
Favicon string `json:"favicon,omitempty"`
Author *JSONAuthor `json:"author,omitempty"`
Expired *bool `json:"expired,omitempty"`
Hubs []*JSONHub `json:"hubs,omitempty"`
Items []*JSONItem `json:"items,omitempty"`
}
// JSON is used to convert a generic Feed to a JSONFeed.
type JSON struct {
*Feed
}
// ToJSON encodes f into a JSON string. Returns an error if marshalling fails.
func (f *JSON) ToJSON() (string, error) {
return f.JSONFeed().ToJSON()
}
// ToJSON encodes f into a JSON string. Returns an error if marshalling fails.
func (f *JSONFeed) ToJSON() (string, error) {
data, err := json.MarshalIndent(f, "", " ")
if err != nil {
return "", err
}
return string(data), nil
}
// JSONFeed creates a new JSONFeed with a generic Feed struct's data.
func (f *JSON) JSONFeed() *JSONFeed {
feed := &JSONFeed{
Version: jsonFeedVersion,
Title: f.Title,
Description: f.Description,
}
if f.Link != nil {
feed.HomePageUrl = f.Link.Href
}
if f.Author != nil {
feed.Author = &JSONAuthor{
Name: f.Author.Name,
}
}
for _, e := range f.Items {
feed.Items = append(feed.Items, newJSONItem(e))
}
return feed
}
func newJSONItem(i *Item) *JSONItem {
item := &JSONItem{
Id: i.Id,
Title: i.Title,
Summary: i.Description,
}
if i.Link != nil {
item.Url = i.Link.Href
}
if i.Source != nil {
item.ExternalUrl = i.Source.Href
}
if i.Author != nil {
item.Author = &JSONAuthor{
Name: i.Author.Name,
}
}
if !i.Created.IsZero() {
item.PublishedDate = &i.Created
}
if !i.Updated.IsZero() {
item.ModifiedDate = &i.Updated
}
if i.Enclosure != nil && strings.HasPrefix(i.Enclosure.Type, "image/") {
item.Image = i.Enclosure.Url
}
return item
}


@ -1,154 +0,0 @@
package feeds
// rss support
// validation done according to spec here:
// http://cyber.law.harvard.edu/rss/rss.html
import (
"encoding/xml"
"fmt"
"time"
)
// private wrapper around the RssFeed which gives us the <rss>..</rss> xml
type rssFeedXml struct {
XMLName xml.Name `xml:"rss"`
Version string `xml:"version,attr"`
Channel *RssFeed
}
type RssImage struct {
XMLName xml.Name `xml:"image"`
Url string `xml:"url"`
Title string `xml:"title"`
Link string `xml:"link"`
Width int `xml:"width,omitempty"`
Height int `xml:"height,omitempty"`
}
type RssTextInput struct {
XMLName xml.Name `xml:"textInput"`
Title string `xml:"title"`
Description string `xml:"description"`
Name string `xml:"name"`
Link string `xml:"link"`
}
type RssFeed struct {
XMLName xml.Name `xml:"channel"`
Title string `xml:"title"` // required
Link string `xml:"link"` // required
Description string `xml:"description"` // required
Language string `xml:"language,omitempty"`
Copyright string `xml:"copyright,omitempty"`
ManagingEditor string `xml:"managingEditor,omitempty"` // Author used
WebMaster string `xml:"webMaster,omitempty"`
PubDate string `xml:"pubDate,omitempty"` // created or updated
LastBuildDate string `xml:"lastBuildDate,omitempty"` // updated used
Category string `xml:"category,omitempty"`
Generator string `xml:"generator,omitempty"`
Docs string `xml:"docs,omitempty"`
Cloud string `xml:"cloud,omitempty"`
Ttl int `xml:"ttl,omitempty"`
Rating string `xml:"rating,omitempty"`
SkipHours string `xml:"skipHours,omitempty"`
SkipDays string `xml:"skipDays,omitempty"`
Image *RssImage
TextInput *RssTextInput
Items []*RssItem
}
type RssItem struct {
XMLName xml.Name `xml:"item"`
Title string `xml:"title"` // required
Link string `xml:"link"` // required
Description string `xml:"description"` // required
Author string `xml:"author,omitempty"`
Category string `xml:"category,omitempty"`
Comments string `xml:"comments,omitempty"`
Enclosure *RssEnclosure
Guid string `xml:"guid,omitempty"` // Id used
PubDate string `xml:"pubDate,omitempty"` // created or updated
Source string `xml:"source,omitempty"`
}
type RssEnclosure struct {
//RSS 2.0 <enclosure url="http://example.com/file.mp3" length="123456789" type="audio/mpeg" />
XMLName xml.Name `xml:"enclosure"`
Url string `xml:"url,attr"`
Length string `xml:"length,attr"`
Type string `xml:"type,attr"`
}
type Rss struct {
*Feed
}
// create a new RssItem with a generic Item struct's data
func newRssItem(i *Item) *RssItem {
item := &RssItem{
Title: i.Title,
Link: i.Link.Href,
Description: i.Description,
Guid: i.Id,
PubDate: anyTimeFormat(time.RFC1123Z, i.Created, i.Updated),
}
if i.Source != nil {
item.Source = i.Source.Href
}
// add an enclosure when both its type and length are set
if i.Enclosure != nil && i.Enclosure.Type != "" && i.Enclosure.Length != "" {
item.Enclosure = &RssEnclosure{Url: i.Enclosure.Url, Type: i.Enclosure.Type, Length: i.Enclosure.Length}
}
if i.Author != nil {
item.Author = i.Author.Name
}
return item
}
// create a new RssFeed with a generic Feed struct's data
func (r *Rss) RssFeed() *RssFeed {
pub := anyTimeFormat(time.RFC1123Z, r.Created, r.Updated)
build := anyTimeFormat(time.RFC1123Z, r.Updated)
author := ""
if r.Author != nil {
author = r.Author.Email
if len(r.Author.Name) > 0 {
author = fmt.Sprintf("%s (%s)", r.Author.Email, r.Author.Name)
}
}
var image *RssImage
if r.Image != nil {
image = &RssImage{Url: r.Image.Url, Title: r.Image.Title, Link: r.Image.Link, Width: r.Image.Width, Height: r.Image.Height}
}
channel := &RssFeed{
Title: r.Title,
Link: r.Link.Href,
Description: r.Description,
ManagingEditor: author,
PubDate: pub,
LastBuildDate: build,
Copyright: r.Copyright,
Image: image,
}
for _, i := range r.Items {
channel.Items = append(channel.Items, newRssItem(i))
}
return channel
}
// return an XML-Ready object for an Rss object
func (r *Rss) FeedXml() interface{} {
// only generate version 2.0 feeds for now
return r.RssFeed().FeedXml()
}
// return an XML-ready object for an RssFeed object
func (r *RssFeed) FeedXml() interface{} {
return &rssFeedXml{Version: "2.0", Channel: r}
}


@ -1,20 +0,0 @@
[Full iTunes list](https://help.apple.com/itc/podcasts_connect/#/itcb54353390)
[Example of ideal iTunes RSS feed](https://help.apple.com/itc/podcasts_connect/#/itcbaf351599)
```
<itunes:author>
<itunes:block>
<itunes:category>
<itunes:image>
<itunes:duration>
<itunes:explicit>
<itunes:isClosedCaptioned>
<itunes:order>
<itunes:complete>
<itunes:new-feed-url>
<itunes:owner>
<itunes:subtitle>
<itunes:summary>
<language>
```
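For reference, here is a rough sketch (not part of gorilla/feeds) of how a few of the `itunes:*` elements above could be attached to an RSS item with `encoding/xml`. The struct and field names are made up for illustration, and the snippet writes the prefixed element names verbatim without declaring the `itunes` namespace:
```go
package main

import (
	"encoding/xml"
	"fmt"
)

// itunesItem is a hypothetical struct for illustration only; it is not part
// of gorilla/feeds. encoding/xml emits the prefixed names below literally and
// does not declare the xmlns:itunes namespace for you.
type itunesItem struct {
	XMLName  xml.Name `xml:"item"`
	Title    string   `xml:"title"`
	Author   string   `xml:"itunes:author,omitempty"`
	Duration string   `xml:"itunes:duration,omitempty"`
	Explicit string   `xml:"itunes:explicit,omitempty"`
	Summary  string   `xml:"itunes:summary,omitempty"`
}

func main() {
	out, err := xml.MarshalIndent(itunesItem{
		Title:    "Episode 1",
		Author:   "Example Host",
		Duration: "12:34",
		Explicit: "no",
	}, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```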


@ -1,27 +0,0 @@
package feeds
// relevant bits from https://github.com/abneptis/GoUUID/blob/master/uuid.go
import (
"crypto/rand"
"fmt"
)
type UUID [16]byte
// create a new uuid v4
func NewUUID() *UUID {
u := &UUID{}
_, err := rand.Read(u[:16])
if err != nil {
panic(err)
}
u[8] = (u[8] | 0x80) & 0xBf
u[6] = (u[6] | 0x40) & 0x4f
return u
}
func (u *UUID) String() string {
return fmt.Sprintf("%x-%x-%x-%x-%x", u[:4], u[4:6], u[6:8], u[8:10], u[10:])
}


@ -1,27 +0,0 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,21 +0,0 @@
### Extensions to the "os" package.
[![GoDoc](https://godoc.org/github.com/kardianos/osext?status.svg)](https://godoc.org/github.com/kardianos/osext)
## Find the current Executable and ExecutableFolder.
As of Go 1.8 the Executable function is available in the standard library `os`
package, and it is used when available.
There is sometimes utility in finding the current executable file
that is running. This can be used for upgrading the current executable
or finding resources located relative to the executable file. Both
working directory and the os.Args[0] value are arbitrary and cannot
be relied on; os.Args[0] can be "faked".
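A minimal usage sketch (assuming the import path above) looks like this:
```go
package main

import (
	"fmt"
	"log"

	"github.com/kardianos/osext"
)

func main() {
	// Absolute path of the currently running binary.
	exe, err := osext.Executable()
	if err != nil {
		log.Fatal(err)
	}
	// Folder containing that binary, without a trailing slash.
	dir, err := osext.ExecutableFolder()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("executable:", exe)
	fmt.Println("folder:", dir)
}
```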
Multi-platform and supports:
* Linux
* OS X
* Windows
* Plan 9
* BSDs.


@ -1,33 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Extensions to the standard "os" package.
package osext // import "github.com/kardianos/osext"
import "path/filepath"
var cx, ce = executableClean()
func executableClean() (string, error) {
p, err := executable()
return filepath.Clean(p), err
}
// Executable returns an absolute path that can be used to
// re-invoke the current program.
// It may not be valid after the current program exits.
func Executable() (string, error) {
return cx, ce
}
// ExecutableFolder returns the folder containing the executable returned by
// Executable, excluding the executable name and any trailing slash.
func ExecutableFolder() (string, error) {
p, err := Executable()
if err != nil {
return "", err
}
return filepath.Dir(p), nil
}


@ -1,9 +0,0 @@
//+build go1.8,!openbsd
package osext
import "os"
func executable() (string, error) {
return os.Executable()
}


@ -1,22 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//+build !go1.8
package osext
import (
"os"
"strconv"
"syscall"
)
func executable() (string, error) {
f, err := os.Open("/proc/" + strconv.Itoa(os.Getpid()) + "/text")
if err != nil {
return "", err
}
defer f.Close()
return syscall.Fd2path(int(f.Fd()))
}


@ -1,36 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !go1.8,android !go1.8,linux !go1.8,netbsd !go1.8,solaris !go1.8,dragonfly
package osext
import (
"errors"
"fmt"
"os"
"runtime"
"strings"
)
func executable() (string, error) {
switch runtime.GOOS {
case "linux", "android":
const deletedTag = " (deleted)"
execpath, err := os.Readlink("/proc/self/exe")
if err != nil {
return execpath, err
}
execpath = strings.TrimSuffix(execpath, deletedTag)
execpath = strings.TrimPrefix(execpath, deletedTag)
return execpath, nil
case "netbsd":
return os.Readlink("/proc/curproc/exe")
case "dragonfly":
return os.Readlink("/proc/curproc/file")
case "solaris":
return os.Readlink(fmt.Sprintf("/proc/%d/path/a.out", os.Getpid()))
}
return "", errors.New("ExecPath not implemented for " + runtime.GOOS)
}


@ -1,126 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !go1.8,darwin !go1.8,freebsd openbsd
package osext
import (
"os"
"os/exec"
"path/filepath"
"runtime"
"syscall"
"unsafe"
)
var initCwd, initCwdErr = os.Getwd()
func executable() (string, error) {
var mib [4]int32
switch runtime.GOOS {
case "freebsd":
mib = [4]int32{1 /* CTL_KERN */, 14 /* KERN_PROC */, 12 /* KERN_PROC_PATHNAME */, -1}
case "darwin":
mib = [4]int32{1 /* CTL_KERN */, 38 /* KERN_PROCARGS */, int32(os.Getpid()), -1}
case "openbsd":
mib = [4]int32{1 /* CTL_KERN */, 55 /* KERN_PROC_ARGS */, int32(os.Getpid()), 1 /* KERN_PROC_ARGV */}
}
n := uintptr(0)
// Get length.
_, _, errNum := syscall.Syscall6(syscall.SYS___SYSCTL, uintptr(unsafe.Pointer(&mib[0])), 4, 0, uintptr(unsafe.Pointer(&n)), 0, 0)
if errNum != 0 {
return "", errNum
}
if n == 0 { // This shouldn't happen.
return "", nil
}
buf := make([]byte, n)
_, _, errNum = syscall.Syscall6(syscall.SYS___SYSCTL, uintptr(unsafe.Pointer(&mib[0])), 4, uintptr(unsafe.Pointer(&buf[0])), uintptr(unsafe.Pointer(&n)), 0, 0)
if errNum != 0 {
return "", errNum
}
if n == 0 { // This shouldn't happen.
return "", nil
}
var execPath string
switch runtime.GOOS {
case "openbsd":
// buf now contains **argv, with pointers to each of the C-style
// NULL terminated arguments.
var args []string
argv := uintptr(unsafe.Pointer(&buf[0]))
Loop:
for {
argp := *(**[1 << 20]byte)(unsafe.Pointer(argv))
if argp == nil {
break
}
for i := 0; uintptr(i) < n; i++ {
// we don't want the full arguments list
if string(argp[i]) == " " {
break Loop
}
if argp[i] != 0 {
continue
}
args = append(args, string(argp[:i]))
n -= uintptr(i)
break
}
if n < unsafe.Sizeof(argv) {
break
}
argv += unsafe.Sizeof(argv)
n -= unsafe.Sizeof(argv)
}
execPath = args[0]
// There is no canonical way to get an executable path on
// OpenBSD, so check PATH in case we are called directly
if execPath[0] != '/' && execPath[0] != '.' {
execIsInPath, err := exec.LookPath(execPath)
if err == nil {
execPath = execIsInPath
}
}
default:
for i, v := range buf {
if v == 0 {
buf = buf[:i]
break
}
}
execPath = string(buf)
}
var err error
// execPath will not be empty due to above checks.
// Try to get the absolute path if the execPath is not rooted.
if execPath[0] != '/' {
execPath, err = getAbs(execPath)
if err != nil {
return execPath, err
}
}
// For darwin KERN_PROCARGS may return the path to a symlink rather than the
// actual executable.
if runtime.GOOS == "darwin" {
if execPath, err = filepath.EvalSymlinks(execPath); err != nil {
return execPath, err
}
}
return execPath, nil
}
func getAbs(execPath string) (string, error) {
if initCwdErr != nil {
return execPath, initCwdErr
}
// The execPath may begin with a "../" or a "./" so clean it first.
// Join the two paths, trailing and starting slashes undetermined, so use
// the generic Join function.
return filepath.Join(initCwd, filepath.Clean(execPath)), nil
}


@ -1,36 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//+build !go1.8
package osext
import (
"syscall"
"unicode/utf16"
"unsafe"
)
var (
kernel = syscall.MustLoadDLL("kernel32.dll")
getModuleFileNameProc = kernel.MustFindProc("GetModuleFileNameW")
)
// GetModuleFileName() with hModule = NULL
func executable() (exePath string, err error) {
return getModuleFileName()
}
func getModuleFileName() (string, error) {
var n uint32
b := make([]uint16, syscall.MAX_PATH)
size := uint32(len(b))
r0, _, e1 := getModuleFileNameProc.Call(0, uintptr(unsafe.Pointer(&b[0])), uintptr(size))
n = uint32(r0)
if n == 0 {
return "", e1
}
return string(utf16.Decode(b[0:n])), nil
}


@ -1,24 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof


@ -1,11 +0,0 @@
language: go
go_import_path: github.com/pkg/errors
go:
- 1.4.3
- 1.5.4
- 1.6.2
- 1.7.1
- tip
script:
- go test -v ./...

vendor/github.com/pkg/errors/LICENSE generated vendored

@ -1,23 +0,0 @@
Copyright (c) 2015, Dave Cheney <dave@cheney.net>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,52 +0,0 @@
# errors [![Travis-CI](https://travis-ci.org/pkg/errors.svg)](https://travis-ci.org/pkg/errors) [![AppVeyor](https://ci.appveyor.com/api/projects/status/b98mptawhudj53ep/branch/master?svg=true)](https://ci.appveyor.com/project/davecheney/errors/branch/master) [![GoDoc](https://godoc.org/github.com/pkg/errors?status.svg)](http://godoc.org/github.com/pkg/errors) [![Report card](https://goreportcard.com/badge/github.com/pkg/errors)](https://goreportcard.com/report/github.com/pkg/errors)
Package errors provides simple error handling primitives.
`go get github.com/pkg/errors`
The traditional error handling idiom in Go is roughly akin to
```go
if err != nil {
return err
}
```
which applied recursively up the call stack results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error.
## Adding context to an error
The errors.Wrap function returns a new error that adds context to the original error. For example
```go
_, err := ioutil.ReadAll(r)
if err != nil {
return errors.Wrap(err, "read failed")
}
```
## Retrieving the cause of an error
Using `errors.Wrap` constructs a stack of errors, adding context to the preceding error. Depending on the nature of the error it may be necessary to reverse the operation of errors.Wrap to retrieve the original error for inspection. Any error value which implements this interface can be inspected by `errors.Cause`.
```go
type causer interface {
Cause() error
}
```
`errors.Cause` will recursively retrieve the topmost error which does not implement `causer`, which is assumed to be the original cause. For example:
```go
switch err := errors.Cause(err).(type) {
case *MyError:
// handle specifically
default:
// unknown error
}
```
[Read the package documentation for more information](https://godoc.org/github.com/pkg/errors).
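Putting the two together, a minimal sketch (the file path and helper name below are only illustrative):
```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/pkg/errors"
)

// readConfig is an illustrative helper; the name and path are made up.
func readConfig(path string) ([]byte, error) {
	b, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, errors.Wrap(err, "read config failed")
	}
	return b, nil
}

func main() {
	if _, err := readConfig("/does/not/exist"); err != nil {
		fmt.Printf("%+v\n", err)                     // message, cause and recorded stack trace
		fmt.Printf("cause: %v\n", errors.Cause(err)) // the original *os.PathError
	}
}
```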
## Contributing
We welcome pull requests, bug fixes and issue reports. With that said, the bar for adding new symbols to this package is intentionally set high.
Before proposing a change, please discuss your change by raising an issue.
## Licence
BSD-2-Clause


@ -1,32 +0,0 @@
version: build-{build}.{branch}
clone_folder: C:\gopath\src\github.com\pkg\errors
shallow_clone: true # for startup speed
environment:
GOPATH: C:\gopath
platform:
- x64
# http://www.appveyor.com/docs/installed-software
install:
# some helpful output for debugging builds
- go version
- go env
# pre-installed MinGW at C:\MinGW is 32bit only
# but MSYS2 at C:\msys64 has mingw64
- set PATH=C:\msys64\mingw64\bin;%PATH%
- gcc --version
- g++ --version
build_script:
- go install -v ./...
test_script:
- set PATH=C:\gopath\bin;%PATH%
- go test -v ./...
#artifacts:
# - path: '%GOPATH%\bin\*.exe'
deploy: off


@ -1,269 +0,0 @@
// Package errors provides simple error handling primitives.
//
// The traditional error handling idiom in Go is roughly akin to
//
// if err != nil {
// return err
// }
//
// which applied recursively up the call stack results in error reports
// without context or debugging information. The errors package allows
// programmers to add context to the failure path in their code in a way
// that does not destroy the original value of the error.
//
// Adding context to an error
//
// The errors.Wrap function returns a new error that adds context to the
// original error by recording a stack trace at the point Wrap is called,
// and the supplied message. For example
//
// _, err := ioutil.ReadAll(r)
// if err != nil {
// return errors.Wrap(err, "read failed")
// }
//
// If additional control is required the errors.WithStack and errors.WithMessage
// functions destructure errors.Wrap into its component operations of annotating
// an error with a stack trace and a message, respectively.
//
// Retrieving the cause of an error
//
// Using errors.Wrap constructs a stack of errors, adding context to the
// preceding error. Depending on the nature of the error it may be necessary
// to reverse the operation of errors.Wrap to retrieve the original error
// for inspection. Any error value which implements this interface
//
// type causer interface {
// Cause() error
// }
//
// can be inspected by errors.Cause. errors.Cause will recursively retrieve
// the topmost error which does not implement causer, which is assumed to be
// the original cause. For example:
//
// switch err := errors.Cause(err).(type) {
// case *MyError:
// // handle specifically
// default:
// // unknown error
// }
//
// causer interface is not exported by this package, but is considered a part
// of stable public API.
//
// Formatted printing of errors
//
// All error values returned from this package implement fmt.Formatter and can
// be formatted by the fmt package. The following verbs are supported
//
// %s print the error. If the error has a Cause it will be
// printed recursively
// %v see %s
// %+v extended format. Each Frame of the error's StackTrace will
// be printed in detail.
//
// Retrieving the stack trace of an error or wrapper
//
// New, Errorf, Wrap, and Wrapf record a stack trace at the point they are
// invoked. This information can be retrieved with the following interface.
//
// type stackTracer interface {
// StackTrace() errors.StackTrace
// }
//
// Where errors.StackTrace is defined as
//
// type StackTrace []Frame
//
// The Frame type represents a call site in the stack trace. Frame supports
// the fmt.Formatter interface that can be used for printing information about
// the stack trace of this error. For example:
//
// if err, ok := err.(stackTracer); ok {
// for _, f := range err.StackTrace() {
// fmt.Printf("%+s:%d", f)
// }
// }
//
// stackTracer interface is not exported by this package, but is considered a part
// of stable public API.
//
// See the documentation for Frame.Format for more details.
package errors
import (
"fmt"
"io"
)
// New returns an error with the supplied message.
// New also records the stack trace at the point it was called.
func New(message string) error {
return &fundamental{
msg: message,
stack: callers(),
}
}
// Errorf formats according to a format specifier and returns the string
// as a value that satisfies error.
// Errorf also records the stack trace at the point it was called.
func Errorf(format string, args ...interface{}) error {
return &fundamental{
msg: fmt.Sprintf(format, args...),
stack: callers(),
}
}
// fundamental is an error that has a message and a stack, but no caller.
type fundamental struct {
msg string
*stack
}
func (f *fundamental) Error() string { return f.msg }
func (f *fundamental) Format(s fmt.State, verb rune) {
switch verb {
case 'v':
if s.Flag('+') {
io.WriteString(s, f.msg)
f.stack.Format(s, verb)
return
}
fallthrough
case 's':
io.WriteString(s, f.msg)
case 'q':
fmt.Fprintf(s, "%q", f.msg)
}
}
// WithStack annotates err with a stack trace at the point WithStack was called.
// If err is nil, WithStack returns nil.
func WithStack(err error) error {
if err == nil {
return nil
}
return &withStack{
err,
callers(),
}
}
type withStack struct {
error
*stack
}
func (w *withStack) Cause() error { return w.error }
func (w *withStack) Format(s fmt.State, verb rune) {
switch verb {
case 'v':
if s.Flag('+') {
fmt.Fprintf(s, "%+v", w.Cause())
w.stack.Format(s, verb)
return
}
fallthrough
case 's':
io.WriteString(s, w.Error())
case 'q':
fmt.Fprintf(s, "%q", w.Error())
}
}
// Wrap returns an error annotating err with a stack trace
// at the point Wrap is called, and the supplied message.
// If err is nil, Wrap returns nil.
func Wrap(err error, message string) error {
if err == nil {
return nil
}
err = &withMessage{
cause: err,
msg: message,
}
return &withStack{
err,
callers(),
}
}
// Wrapf returns an error annotating err with a stack trace
// at the point Wrapf is called, and the format specifier.
// If err is nil, Wrapf returns nil.
func Wrapf(err error, format string, args ...interface{}) error {
if err == nil {
return nil
}
err = &withMessage{
cause: err,
msg: fmt.Sprintf(format, args...),
}
return &withStack{
err,
callers(),
}
}
// WithMessage annotates err with a new message.
// If err is nil, WithMessage returns nil.
func WithMessage(err error, message string) error {
if err == nil {
return nil
}
return &withMessage{
cause: err,
msg: message,
}
}
type withMessage struct {
cause error
msg string
}
func (w *withMessage) Error() string { return w.msg + ": " + w.cause.Error() }
func (w *withMessage) Cause() error { return w.cause }
func (w *withMessage) Format(s fmt.State, verb rune) {
switch verb {
case 'v':
if s.Flag('+') {
fmt.Fprintf(s, "%+v\n", w.Cause())
io.WriteString(s, w.msg)
return
}
fallthrough
case 's', 'q':
io.WriteString(s, w.Error())
}
}
// Cause returns the underlying cause of the error, if possible.
// An error value has a cause if it implements the following
// interface:
//
// type causer interface {
// Cause() error
// }
//
// If the error does not implement Cause, the original error will
// be returned. If the error is nil, nil will be returned without further
// investigation.
func Cause(err error) error {
type causer interface {
Cause() error
}
for err != nil {
cause, ok := err.(causer)
if !ok {
break
}
err = cause.Cause()
}
return err
}

vendor/github.com/pkg/errors/stack.go generated vendored

@ -1,178 +0,0 @@
package errors
import (
"fmt"
"io"
"path"
"runtime"
"strings"
)
// Frame represents a program counter inside a stack frame.
type Frame uintptr
// pc returns the program counter for this frame;
// multiple frames may have the same PC value.
func (f Frame) pc() uintptr { return uintptr(f) - 1 }
// file returns the full path to the file that contains the
// function for this Frame's pc.
func (f Frame) file() string {
fn := runtime.FuncForPC(f.pc())
if fn == nil {
return "unknown"
}
file, _ := fn.FileLine(f.pc())
return file
}
// line returns the line number of source code of the
// function for this Frame's pc.
func (f Frame) line() int {
fn := runtime.FuncForPC(f.pc())
if fn == nil {
return 0
}
_, line := fn.FileLine(f.pc())
return line
}
// Format formats the frame according to the fmt.Formatter interface.
//
// %s source file
// %d source line
// %n function name
// %v equivalent to %s:%d
//
// Format accepts flags that alter the printing of some verbs, as follows:
//
// %+s path of source file relative to the compile time GOPATH
// %+v equivalent to %+s:%d
func (f Frame) Format(s fmt.State, verb rune) {
switch verb {
case 's':
switch {
case s.Flag('+'):
pc := f.pc()
fn := runtime.FuncForPC(pc)
if fn == nil {
io.WriteString(s, "unknown")
} else {
file, _ := fn.FileLine(pc)
fmt.Fprintf(s, "%s\n\t%s", fn.Name(), file)
}
default:
io.WriteString(s, path.Base(f.file()))
}
case 'd':
fmt.Fprintf(s, "%d", f.line())
case 'n':
name := runtime.FuncForPC(f.pc()).Name()
io.WriteString(s, funcname(name))
case 'v':
f.Format(s, 's')
io.WriteString(s, ":")
f.Format(s, 'd')
}
}
// StackTrace is a stack of Frames from innermost (newest) to outermost (oldest).
type StackTrace []Frame
func (st StackTrace) Format(s fmt.State, verb rune) {
switch verb {
case 'v':
switch {
case s.Flag('+'):
for _, f := range st {
fmt.Fprintf(s, "\n%+v", f)
}
case s.Flag('#'):
fmt.Fprintf(s, "%#v", []Frame(st))
default:
fmt.Fprintf(s, "%v", []Frame(st))
}
case 's':
fmt.Fprintf(s, "%s", []Frame(st))
}
}
// stack represents a stack of program counters.
type stack []uintptr
func (s *stack) Format(st fmt.State, verb rune) {
switch verb {
case 'v':
switch {
case st.Flag('+'):
for _, pc := range *s {
f := Frame(pc)
fmt.Fprintf(st, "\n%+v", f)
}
}
}
}
func (s *stack) StackTrace() StackTrace {
f := make([]Frame, len(*s))
for i := 0; i < len(f); i++ {
f[i] = Frame((*s)[i])
}
return f
}
func callers() *stack {
const depth = 32
var pcs [depth]uintptr
n := runtime.Callers(3, pcs[:])
var st stack = pcs[0:n]
return &st
}
// funcname removes the path prefix component of a function's name reported by func.Name().
func funcname(name string) string {
i := strings.LastIndex(name, "/")
name = name[i+1:]
i = strings.Index(name, ".")
return name[i+1:]
}
func trimGOPATH(name, file string) string {
// Here we want to get the source file path relative to the compile time
// GOPATH. As of Go 1.6.x there is no direct way to know the compiled
// GOPATH at runtime, but we can infer the number of path segments in the
// GOPATH. We note that fn.Name() returns the function name qualified by
// the import path, which does not include the GOPATH. Thus we can trim
// segments from the beginning of the file path until the number of path
// separators remaining is one more than the number of path separators in
// the function name. For example, given:
//
// GOPATH /home/user
// file /home/user/src/pkg/sub/file.go
// fn.Name() pkg/sub.Type.Method
//
// We want to produce:
//
// pkg/sub/file.go
//
// From this we can easily see that fn.Name() has one less path separator
// than our desired output. We count separators from the end of the file
// path until it finds two more than in the function name and then move
// one character forward to preserve the initial path segment without a
// leading separator.
const sep = "/"
goal := strings.Count(name, sep) + 2
i := len(file)
for n := 0; n < goal; n++ {
i = strings.LastIndex(file[:i], sep)
if i == -1 {
// not enough separators found, set i so that the slice expression
// below leaves file unmodified
i = -len(sep)
break
}
}
// get back to 0 or trim the leading separator
file = file[i+len(sep):]
return file
}


@ -1,8 +0,0 @@
*.out
*.swp
*.8
*.6
_obj
_test*
markdown
tags


@ -1,18 +0,0 @@
# Travis CI (http://travis-ci.org/) is a continuous integration service for
# open source projects. This file configures it to run unit tests for
# blackfriday.
language: go
go:
- 1.5
- 1.6
- 1.7
install:
- go get -d -t -v ./...
- go build -v ./...
script:
- go test -v ./...
- go test -run=^$ -bench=BenchmarkReference -benchmem


@ -1,29 +0,0 @@
Blackfriday is distributed under the Simplified BSD License:
> Copyright © 2011 Russ Ross
> All rights reserved.
>
> Redistribution and use in source and binary forms, with or without
> modification, are permitted provided that the following conditions
> are met:
>
> 1. Redistributions of source code must retain the above copyright
> notice, this list of conditions and the following disclaimer.
>
> 2. Redistributions in binary form must reproduce the above
> copyright notice, this list of conditions and the following
> disclaimer in the documentation and/or other materials provided with
> the distribution.
>
> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
> BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
> CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
> ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
> POSSIBILITY OF SUCH DAMAGE.


@ -1,283 +0,0 @@
Blackfriday [![Build Status](https://travis-ci.org/russross/blackfriday.svg?branch=master)](https://travis-ci.org/russross/blackfriday)
===========
Blackfriday is a [Markdown][1] processor implemented in [Go][2]. It
is paranoid about its input (so you can safely feed it user-supplied
data), it is fast, it supports common extensions (tables, smart
punctuation substitutions, etc.), and it is safe for all utf-8
(unicode) input.
HTML output is currently supported, along with Smartypants
extensions.
It started as a translation from C of [Sundown][3].
Installation
------------
Blackfriday is compatible with any modern Go release. With Go 1.7 and git
installed:
go get gopkg.in/russross/blackfriday.v2
will download, compile, and install the package into your `$GOPATH`
directory hierarchy. Alternatively, you can achieve the same if you
import it into a project:
import "gopkg.in/russross/blackfriday.v2"
and `go get` without parameters.
Versions
--------
Currently maintained and recommended version of Blackfriday is `v2`. It's being
developed on its own branch: https://github.com/russross/blackfriday/v2. You
should install and import it via [gopkg.in][6] at
`gopkg.in/russross/blackfriday.v2`.
Version 2 offers a number of improvements over v1:
* Cleaned up API
* A separate call to [`Parse`][4], which produces an abstract syntax tree for
the document
* Latest bug fixes
* Flexibility to easily add your own rendering extensions
Potential drawbacks:
* Our benchmarks show v2 to be slightly slower than v1. Currently in the
ballpark of around 15%.
* API breakage. If you can't afford modifying your code to adhere to the new API
and don't care too much about the new features, v2 is probably not for you.
* Several bug fixes are trailing behind and still need to be forward-ported to
v2. See issue [#348](https://github.com/russross/blackfriday/issues/348) for
tracking.
Usage
-----
For the most sensible markdown processing, it is as simple as getting your input
into a byte slice and calling:
```go
output := blackfriday.Run(input)
```
Your input will be parsed and the output rendered with a set of most popular
extensions enabled. If you want the most basic feature set, corresponding with
the bare Markdown specification, use:
```go
output := blackfriday.Run(input, blackfriday.WithNoExtensions())
```
### Sanitize untrusted content
Blackfriday itself does nothing to protect against malicious content. If you are
dealing with user-supplied markdown, we recommend running Blackfriday's output
through an HTML sanitizer such as [Bluemonday][5].
Here's an example of simple usage of Blackfriday together with Bluemonday:
```go
import (
"github.com/microcosm-cc/bluemonday"
"github.com/russross/blackfriday"
)
// ...
unsafe := blackfriday.Run(input)
html := bluemonday.UGCPolicy().SanitizeBytes(unsafe)
```
### Custom options
If you want to customize the set of options, use `blackfriday.WithExtensions`,
`blackfriday.WithRenderer` and `blackfriday.WithRefOverride`.
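For example, a small sketch combining these options (assuming the v2 import path above and the renderer parameters and flags defined in this package) might look like:
```go
package main

import (
	"fmt"

	blackfriday "gopkg.in/russross/blackfriday.v2"
)

func main() {
	input := []byte("# Title\n\nSome *markdown* text.\n")

	// Custom HTML renderer: smart punctuation plus a table of contents.
	renderer := blackfriday.NewHTMLRenderer(blackfriday.HTMLRendererParameters{
		Flags: blackfriday.Smartypants | blackfriday.TOC,
	})
	output := blackfriday.Run(input,
		blackfriday.WithExtensions(blackfriday.CommonExtensions),
		blackfriday.WithRenderer(renderer),
	)
	fmt.Println(string(output))
}
```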
You can also check out `blackfriday-tool` for a more complete example
of how to use it. Download and install it using:
go get github.com/russross/blackfriday-tool
This is a simple command-line tool that allows you to process a
markdown file using a standalone program. You can also browse the
source directly on github if you are just looking for some example
code:
* <http://github.com/russross/blackfriday-tool>
Note that if you have not already done so, installing
`blackfriday-tool` will be sufficient to download and install
blackfriday in addition to the tool itself. The tool binary will be
installed in `$GOPATH/bin`. This is a statically-linked binary that
can be copied to wherever you need it without worrying about
dependencies and library versions.
Features
--------
All features of Sundown are supported, including:
* **Compatibility**. The Markdown v1.0.3 test suite passes with
the `--tidy` option. Without `--tidy`, the differences are
mostly in whitespace and entity escaping, where blackfriday is
more consistent and cleaner.
* **Common extensions**, including table support, fenced code
blocks, autolinks, strikethroughs, non-strict emphasis, etc.
* **Safety**. Blackfriday is paranoid when parsing, making it safe
to feed untrusted user input without fear of bad things
happening. The test suite stress tests this and there are no
known inputs that make it crash. If you find one, please let me
know and send me the input that does it.
NOTE: "safety" in this context means *runtime safety only*. In order to
protect yourself against JavaScript injection in untrusted content, see
[this example](https://github.com/russross/blackfriday#sanitize-untrusted-content).
* **Fast processing**. It is fast enough to render on-demand in
most web applications without having to cache the output.
* **Thread safety**. You can run multiple parsers in different
goroutines without ill effect. There is no dependence on global
shared state.
* **Minimal dependencies**. Blackfriday only depends on standard
library packages in Go. The source code is pretty
self-contained, so it is easy to add to any project, including
Google App Engine projects.
* **Standards compliant**. Output successfully validates using the
W3C validation tool for HTML 4.01 and XHTML 1.0 Transitional.
Extensions
----------
In addition to the standard markdown syntax, this package
implements the following extensions:
* **Intra-word emphasis suppression**. The `_` character is
commonly used inside words when discussing code, so having
markdown interpret it as an emphasis command is usually the
wrong thing. Blackfriday lets you treat all emphasis markers as
normal characters when they occur inside a word.
* **Tables**. Tables can be created by drawing them in the input
using a simple syntax:
```
Name | Age
--------|------
Bob | 27
Alice | 23
```
* **Fenced code blocks**. In addition to the normal 4-space
indentation to mark code blocks, you can explicitly mark them
and supply a language (to make syntax highlighting simple). Just
mark it like this:
```go
func getTrue() bool {
return true
}
```
You can use 3 or more backticks to mark the beginning of the
block, and the same number to mark the end of the block.
* **Definition lists**. A simple definition list is made of a single-line
term followed by a colon and the definition for that term.
Cat
: Fluffy animal everyone likes
Internet
: Vector of transmission for pictures of cats
Terms must be separated from the previous definition by a blank line.
* **Footnotes**. A marker in the text that will become a superscript number;
a footnote definition that will be placed in a list of footnotes at the
end of the document. A footnote looks like this:
This is a footnote.[^1]
[^1]: the footnote text.
* **Autolinking**. Blackfriday can find URLs that have not been
explicitly marked as links and turn them into links.
* **Strikethrough**. Use two tildes (`~~`) to mark text that
should be crossed out.
* **Hard line breaks**. With this extension enabled newlines in the input
translate into line breaks in the output. This extension is off by default.
* **Smart quotes**. Smartypants-style punctuation substitution is
supported, turning normal double- and single-quote marks into
curly quotes, etc.
* **LaTeX-style dash parsing** is an additional option, where `--`
is translated into `&ndash;`, and `---` is translated into
`&mdash;`. This differs from most smartypants processors, which
turn a single hyphen into an ndash and a double hyphen into an
mdash.
* **Smart fractions**, where anything that looks like a fraction
is translated into suitable HTML (instead of just a few special
cases like most smartypants processors). For example, `4/5`
becomes `<sup>4</sup>&frasl;<sub>5</sub>`, which renders as
<sup>4</sup>&frasl;<sub>5</sub>.
Other renderers
---------------
Blackfriday is structured to allow alternative rendering engines. Here
are a few of note:
* [github_flavored_markdown](https://godoc.org/github.com/shurcooL/github_flavored_markdown):
provides a GitHub Flavored Markdown renderer with fenced code block
highlighting, clickable heading anchor links.
It's not customizable, and its goal is to produce HTML output
equivalent to the [GitHub Markdown API endpoint](https://developer.github.com/v3/markdown/#render-a-markdown-document-in-raw-mode),
except the rendering is performed locally.
* [markdownfmt](https://github.com/shurcooL/markdownfmt): like gofmt,
but for markdown.
* [LaTeX output](https://bitbucket.org/ambrevar/blackfriday-latex):
renders output as LaTeX.
Todo
----
* More unit testing
* Improve unicode support. It does not understand all unicode
rules (about what constitutes a letter, a punctuation symbol,
etc.), so it may fail to detect word boundaries correctly in
some instances. It is safe on all utf-8 input.
License
-------
[Blackfriday is distributed under the Simplified BSD License](LICENSE.txt)
[1]: https://daringfireball.net/projects/markdown/ "Markdown"
[2]: https://golang.org/ "Go Language"
[3]: https://github.com/vmg/sundown "Sundown"
[4]: https://godoc.org/gopkg.in/russross/blackfriday.v2#Parse "Parse func"
[5]: https://github.com/microcosm-cc/bluemonday "Bluemonday"
[6]: https://labix.org/gopkg.in "gopkg.in"

File diff suppressed because it is too large.


@ -1,18 +0,0 @@
// Package blackfriday is a markdown processor.
//
// It translates plain text with simple formatting rules into an AST, which can
// then be further processed to HTML (provided by Blackfriday itself) or other
// formats (provided by the community).
//
// The simplest way to invoke Blackfriday is to call the Run function. It will
// take a text input and produce a text output in HTML (or other format).
//
// A slightly more sophisticated way to use Blackfriday is to create a Markdown
// processor and to call Parse, which returns a syntax tree for the input
// document. You can leverage Blackfriday's parsing for content extraction from
// markdown documents. You can assign a custom renderer and set various options
// to the Markdown processor.
//
// If you're interested in calling Blackfriday from command line, see
// https://github.com/russross/blackfriday-tool.
package blackfriday


@ -1,34 +0,0 @@
package blackfriday
import (
"html"
"io"
)
var htmlEscaper = [256][]byte{
'&': []byte("&amp;"),
'<': []byte("&lt;"),
'>': []byte("&gt;"),
'"': []byte("&quot;"),
}
func escapeHTML(w io.Writer, s []byte) {
var start, end int
for end < len(s) {
escSeq := htmlEscaper[s[end]]
if escSeq != nil {
w.Write(s[start:end])
w.Write(escSeq)
start = end + 1
}
end++
}
if start < len(s) && end <= len(s) {
w.Write(s[start:end])
}
}
func escLink(w io.Writer, text []byte) {
unesc := html.UnescapeString(string(text))
escapeHTML(w, []byte(unesc))
}


@ -1,940 +0,0 @@
//
// Blackfriday Markdown Processor
// Available at http://github.com/russross/blackfriday
//
// Copyright © 2011 Russ Ross <russ@russross.com>.
// Distributed under the Simplified BSD License.
// See README.md for details.
//
//
//
// HTML rendering backend
//
//
package blackfriday
import (
"bytes"
"fmt"
"io"
"regexp"
"strings"
)
// HTMLFlags control optional behavior of HTML renderer.
type HTMLFlags int
// HTML renderer configuration options.
const (
HTMLFlagsNone HTMLFlags = 0
SkipHTML HTMLFlags = 1 << iota // Skip preformatted HTML blocks
SkipImages // Skip embedded images
SkipLinks // Skip all links
Safelink // Only link to trusted protocols
NofollowLinks // Only link with rel="nofollow"
NoreferrerLinks // Only link with rel="noreferrer"
HrefTargetBlank // Add a blank target
CompletePage // Generate a complete HTML page
UseXHTML // Generate XHTML output instead of HTML
FootnoteReturnLinks // Generate a link at the end of a footnote to return to the source
Smartypants // Enable smart punctuation substitutions
SmartypantsFractions // Enable smart fractions (with Smartypants)
SmartypantsDashes // Enable smart dashes (with Smartypants)
SmartypantsLatexDashes // Enable LaTeX-style dashes (with Smartypants)
SmartypantsAngledQuotes // Enable angled double quotes (with Smartypants) for double quotes rendering
SmartypantsQuotesNBSP // Enable « French guillemets » (with Smartypants)
TOC // Generate a table of contents
)
var (
htmlTagRe = regexp.MustCompile("(?i)^" + htmlTag)
)
const (
htmlTag = "(?:" + openTag + "|" + closeTag + "|" + htmlComment + "|" +
processingInstruction + "|" + declaration + "|" + cdata + ")"
closeTag = "</" + tagName + "\\s*[>]"
openTag = "<" + tagName + attribute + "*" + "\\s*/?>"
attribute = "(?:" + "\\s+" + attributeName + attributeValueSpec + "?)"
attributeValue = "(?:" + unquotedValue + "|" + singleQuotedValue + "|" + doubleQuotedValue + ")"
attributeValueSpec = "(?:" + "\\s*=" + "\\s*" + attributeValue + ")"
attributeName = "[a-zA-Z_:][a-zA-Z0-9:._-]*"
cdata = "<!\\[CDATA\\[[\\s\\S]*?\\]\\]>"
declaration = "<![A-Z]+" + "\\s+[^>]*>"
doubleQuotedValue = "\"[^\"]*\""
htmlComment = "<!---->|<!--(?:-?[^>-])(?:-?[^-])*-->"
processingInstruction = "[<][?].*?[?][>]"
singleQuotedValue = "'[^']*'"
tagName = "[A-Za-z][A-Za-z0-9-]*"
unquotedValue = "[^\"'=<>`\\x00-\\x20]+"
)
// HTMLRendererParameters is a collection of supplementary parameters tweaking
// the behavior of various parts of HTML renderer.
type HTMLRendererParameters struct {
// Prepend this text to each relative URL.
AbsolutePrefix string
// Add this text to each footnote anchor, to ensure uniqueness.
FootnoteAnchorPrefix string
// Show this text inside the <a> tag for a footnote return link, if the
// HTML_FOOTNOTE_RETURN_LINKS flag is enabled. If blank, the string
// <sup>[return]</sup> is used.
FootnoteReturnLinkContents string
// If set, add this text to the front of each Heading ID, to ensure
// uniqueness.
HeadingIDPrefix string
// If set, add this text to the back of each Heading ID, to ensure uniqueness.
HeadingIDSuffix string
Title string // Document title (used if CompletePage is set)
CSS string // Optional CSS file URL (used if CompletePage is set)
Icon string // Optional icon file URL (used if CompletePage is set)
Flags HTMLFlags // Flags allow customizing this renderer's behavior
}
// HTMLRenderer is a type that implements the Renderer interface for HTML output.
//
// Do not create this directly, instead use the NewHTMLRenderer function.
type HTMLRenderer struct {
HTMLRendererParameters
closeTag string // how to end singleton tags: either " />" or ">"
// Track heading IDs to prevent ID collision in a single generation.
headingIDs map[string]int
lastOutputLen int
disableTags int
sr *SPRenderer
}
const (
xhtmlClose = " />"
htmlClose = ">"
)
// NewHTMLRenderer creates and configures an HTMLRenderer object, which
// satisfies the Renderer interface.
func NewHTMLRenderer(params HTMLRendererParameters) *HTMLRenderer {
// configure the rendering engine
closeTag := htmlClose
if params.Flags&UseXHTML != 0 {
closeTag = xhtmlClose
}
if params.FootnoteReturnLinkContents == "" {
params.FootnoteReturnLinkContents = `<sup>[return]</sup>`
}
return &HTMLRenderer{
HTMLRendererParameters: params,
closeTag: closeTag,
headingIDs: make(map[string]int),
sr: NewSmartypantsRenderer(params.Flags),
}
}
func isHTMLTag(tag []byte, tagname string) bool {
found, _ := findHTMLTagPos(tag, tagname)
return found
}
// Look for a character, but ignore it when it's in any kind of quotes, as it
// might be JavaScript
func skipUntilCharIgnoreQuotes(html []byte, start int, char byte) int {
inSingleQuote := false
inDoubleQuote := false
inGraveQuote := false
i := start
for i < len(html) {
switch {
case html[i] == char && !inSingleQuote && !inDoubleQuote && !inGraveQuote:
return i
case html[i] == '\'':
inSingleQuote = !inSingleQuote
case html[i] == '"':
inDoubleQuote = !inDoubleQuote
case html[i] == '`':
inGraveQuote = !inGraveQuote
}
i++
}
return start
}
func findHTMLTagPos(tag []byte, tagname string) (bool, int) {
i := 0
if i < len(tag) && tag[0] != '<' {
return false, -1
}
i++
i = skipSpace(tag, i)
if i < len(tag) && tag[i] == '/' {
i++
}
i = skipSpace(tag, i)
j := 0
for ; i < len(tag); i, j = i+1, j+1 {
if j >= len(tagname) {
break
}
if strings.ToLower(string(tag[i]))[0] != tagname[j] {
return false, -1
}
}
if i == len(tag) {
return false, -1
}
rightAngle := skipUntilCharIgnoreQuotes(tag, i, '>')
if rightAngle >= i {
return true, rightAngle
}
return false, -1
}
func skipSpace(tag []byte, i int) int {
for i < len(tag) && isspace(tag[i]) {
i++
}
return i
}
func isRelativeLink(link []byte) (yes bool) {
// a link beginning with '#' is an in-page anchor
if link[0] == '#' {
return true
}
// a link beginning with '/' but not '//' is relative ("//" may be a protocol-relative link)
if len(link) >= 2 && link[0] == '/' && link[1] != '/' {
return true
}
// only the root '/'
if len(link) == 1 && link[0] == '/' {
return true
}
// current directory : begin with "./"
if bytes.HasPrefix(link, []byte("./")) {
return true
}
// parent directory : begin with "../"
if bytes.HasPrefix(link, []byte("../")) {
return true
}
return false
}
func (r *HTMLRenderer) ensureUniqueHeadingID(id string) string {
for count, found := r.headingIDs[id]; found; count, found = r.headingIDs[id] {
tmp := fmt.Sprintf("%s-%d", id, count+1)
if _, tmpFound := r.headingIDs[tmp]; !tmpFound {
r.headingIDs[id] = count + 1
id = tmp
} else {
id = id + "-1"
}
}
if _, found := r.headingIDs[id]; !found {
r.headingIDs[id] = 0
}
return id
}
func (r *HTMLRenderer) addAbsPrefix(link []byte) []byte {
if r.AbsolutePrefix != "" && isRelativeLink(link) && link[0] != '.' {
newDest := r.AbsolutePrefix
if link[0] != '/' {
newDest += "/"
}
newDest += string(link)
return []byte(newDest)
}
return link
}
func appendLinkAttrs(attrs []string, flags HTMLFlags, link []byte) []string {
if isRelativeLink(link) {
return attrs
}
val := []string{}
if flags&NofollowLinks != 0 {
val = append(val, "nofollow")
}
if flags&NoreferrerLinks != 0 {
val = append(val, "noreferrer")
}
if flags&HrefTargetBlank != 0 {
attrs = append(attrs, "target=\"_blank\"")
}
if len(val) == 0 {
return attrs
}
attr := fmt.Sprintf("rel=%q", strings.Join(val, " "))
return append(attrs, attr)
}
func isMailto(link []byte) bool {
return bytes.HasPrefix(link, []byte("mailto:"))
}
func needSkipLink(flags HTMLFlags, dest []byte) bool {
if flags&SkipLinks != 0 {
return true
}
return flags&Safelink != 0 && !isSafeLink(dest) && !isMailto(dest)
}
func isSmartypantable(node *Node) bool {
pt := node.Parent.Type
return pt != Link && pt != CodeBlock && pt != Code
}
func appendLanguageAttr(attrs []string, info []byte) []string {
if len(info) == 0 {
return attrs
}
endOfLang := bytes.IndexAny(info, "\t ")
if endOfLang < 0 {
endOfLang = len(info)
}
return append(attrs, fmt.Sprintf("class=\"language-%s\"", info[:endOfLang]))
}
func (r *HTMLRenderer) tag(w io.Writer, name []byte, attrs []string) {
w.Write(name)
if len(attrs) > 0 {
w.Write(spaceBytes)
w.Write([]byte(strings.Join(attrs, " ")))
}
w.Write(gtBytes)
r.lastOutputLen = 1
}
func footnoteRef(prefix string, node *Node) []byte {
urlFrag := prefix + string(slugify(node.Destination))
anchor := fmt.Sprintf(`<a rel="footnote" href="#fn:%s">%d</a>`, urlFrag, node.NoteID)
return []byte(fmt.Sprintf(`<sup class="footnote-ref" id="fnref:%s">%s</sup>`, urlFrag, anchor))
}
func footnoteItem(prefix string, slug []byte) []byte {
return []byte(fmt.Sprintf(`<li id="fn:%s%s">`, prefix, slug))
}
func footnoteReturnLink(prefix, returnLink string, slug []byte) []byte {
const format = ` <a class="footnote-return" href="#fnref:%s%s">%s</a>`
return []byte(fmt.Sprintf(format, prefix, slug, returnLink))
}
func itemOpenCR(node *Node) bool {
if node.Prev == nil {
return false
}
ld := node.Parent.ListData
return !ld.Tight && ld.ListFlags&ListTypeDefinition == 0
}
func skipParagraphTags(node *Node) bool {
grandparent := node.Parent.Parent
if grandparent == nil || grandparent.Type != List {
return false
}
tightOrTerm := grandparent.Tight || node.Parent.ListFlags&ListTypeTerm != 0
return grandparent.Type == List && tightOrTerm
}
func cellAlignment(align CellAlignFlags) string {
switch align {
case TableAlignmentLeft:
return "left"
case TableAlignmentRight:
return "right"
case TableAlignmentCenter:
return "center"
default:
return ""
}
}
func (r *HTMLRenderer) out(w io.Writer, text []byte) {
if r.disableTags > 0 {
w.Write(htmlTagRe.ReplaceAll(text, []byte{}))
} else {
w.Write(text)
}
r.lastOutputLen = len(text)
}
func (r *HTMLRenderer) cr(w io.Writer) {
if r.lastOutputLen > 0 {
r.out(w, nlBytes)
}
}
var (
nlBytes = []byte{'\n'}
gtBytes = []byte{'>'}
spaceBytes = []byte{' '}
)
var (
brTag = []byte("<br>")
brXHTMLTag = []byte("<br />")
emTag = []byte("<em>")
emCloseTag = []byte("</em>")
strongTag = []byte("<strong>")
strongCloseTag = []byte("</strong>")
delTag = []byte("<del>")
delCloseTag = []byte("</del>")
ttTag = []byte("<tt>")
ttCloseTag = []byte("</tt>")
aTag = []byte("<a")
aCloseTag = []byte("</a>")
preTag = []byte("<pre>")
preCloseTag = []byte("</pre>")
codeTag = []byte("<code>")
codeCloseTag = []byte("</code>")
pTag = []byte("<p>")
pCloseTag = []byte("</p>")
blockquoteTag = []byte("<blockquote>")
blockquoteCloseTag = []byte("</blockquote>")
hrTag = []byte("<hr>")
hrXHTMLTag = []byte("<hr />")
ulTag = []byte("<ul>")
ulCloseTag = []byte("</ul>")
olTag = []byte("<ol>")
olCloseTag = []byte("</ol>")
dlTag = []byte("<dl>")
dlCloseTag = []byte("</dl>")
liTag = []byte("<li>")
liCloseTag = []byte("</li>")
ddTag = []byte("<dd>")
ddCloseTag = []byte("</dd>")
dtTag = []byte("<dt>")
dtCloseTag = []byte("</dt>")
tableTag = []byte("<table>")
tableCloseTag = []byte("</table>")
tdTag = []byte("<td")
tdCloseTag = []byte("</td>")
thTag = []byte("<th")
thCloseTag = []byte("</th>")
theadTag = []byte("<thead>")
theadCloseTag = []byte("</thead>")
tbodyTag = []byte("<tbody>")
tbodyCloseTag = []byte("</tbody>")
trTag = []byte("<tr>")
trCloseTag = []byte("</tr>")
h1Tag = []byte("<h1")
h1CloseTag = []byte("</h1>")
h2Tag = []byte("<h2")
h2CloseTag = []byte("</h2>")
h3Tag = []byte("<h3")
h3CloseTag = []byte("</h3>")
h4Tag = []byte("<h4")
h4CloseTag = []byte("</h4>")
h5Tag = []byte("<h5")
h5CloseTag = []byte("</h5>")
h6Tag = []byte("<h6")
h6CloseTag = []byte("</h6>")
footnotesDivBytes = []byte("\n<div class=\"footnotes\">\n\n")
footnotesCloseDivBytes = []byte("\n</div>\n")
)
func headingTagsFromLevel(level int) ([]byte, []byte) {
switch level {
case 1:
return h1Tag, h1CloseTag
case 2:
return h2Tag, h2CloseTag
case 3:
return h3Tag, h3CloseTag
case 4:
return h4Tag, h4CloseTag
case 5:
return h5Tag, h5CloseTag
default:
return h6Tag, h6CloseTag
}
}
func (r *HTMLRenderer) outHRTag(w io.Writer) {
if r.Flags&UseXHTML == 0 {
r.out(w, hrTag)
} else {
r.out(w, hrXHTMLTag)
}
}
// RenderNode is a default renderer of a single node of a syntax tree. For
// block nodes it will be called twice: first time with entering=true, second
// time with entering=false, so that it knows whether it is working on an open
// tag or a closing one. It writes the result to w.
//
// The return value is a way to tell the calling walker to adjust its walk
// pattern: e.g. it can terminate the traversal by returning Terminate. Or it
// can ask the walker to skip a subtree of this node by returning SkipChildren.
// The typical behavior is to return GoToNext, which asks for the usual
// traversal to the next node.
func (r *HTMLRenderer) RenderNode(w io.Writer, node *Node, entering bool) WalkStatus {
attrs := []string{}
switch node.Type {
case Text:
if r.Flags&Smartypants != 0 {
var tmp bytes.Buffer
escapeHTML(&tmp, node.Literal)
r.sr.Process(w, tmp.Bytes())
} else {
if node.Parent.Type == Link {
escLink(w, node.Literal)
} else {
escapeHTML(w, node.Literal)
}
}
case Softbreak:
r.cr(w)
// TODO: make it configurable via out(renderer.softbreak)
case Hardbreak:
if r.Flags&UseXHTML == 0 {
r.out(w, brTag)
} else {
r.out(w, brXHTMLTag)
}
r.cr(w)
case Emph:
if entering {
r.out(w, emTag)
} else {
r.out(w, emCloseTag)
}
case Strong:
if entering {
r.out(w, strongTag)
} else {
r.out(w, strongCloseTag)
}
case Del:
if entering {
r.out(w, delTag)
} else {
r.out(w, delCloseTag)
}
case HTMLSpan:
if r.Flags&SkipHTML != 0 {
break
}
r.out(w, node.Literal)
case Link:
// mark it but don't link it if it is not a safe link: no smartypants
dest := node.LinkData.Destination
if needSkipLink(r.Flags, dest) {
if entering {
r.out(w, ttTag)
} else {
r.out(w, ttCloseTag)
}
} else {
if entering {
dest = r.addAbsPrefix(dest)
var hrefBuf bytes.Buffer
hrefBuf.WriteString("href=\"")
escLink(&hrefBuf, dest)
hrefBuf.WriteByte('"')
attrs = append(attrs, hrefBuf.String())
if node.NoteID != 0 {
r.out(w, footnoteRef(r.FootnoteAnchorPrefix, node))
break
}
attrs = appendLinkAttrs(attrs, r.Flags, dest)
if len(node.LinkData.Title) > 0 {
var titleBuff bytes.Buffer
titleBuff.WriteString("title=\"")
escapeHTML(&titleBuff, node.LinkData.Title)
titleBuff.WriteByte('"')
attrs = append(attrs, titleBuff.String())
}
r.tag(w, aTag, attrs)
} else {
if node.NoteID != 0 {
break
}
r.out(w, aCloseTag)
}
}
case Image:
if r.Flags&SkipImages != 0 {
return SkipChildren
}
if entering {
dest := node.LinkData.Destination
dest = r.addAbsPrefix(dest)
if r.disableTags == 0 {
//if options.safe && potentiallyUnsafe(dest) {
//out(w, `<img src="" alt="`)
//} else {
r.out(w, []byte(`<img src="`))
escLink(w, dest)
r.out(w, []byte(`" alt="`))
//}
}
r.disableTags++
} else {
r.disableTags--
if r.disableTags == 0 {
if node.LinkData.Title != nil {
r.out(w, []byte(`" title="`))
escapeHTML(w, node.LinkData.Title)
}
r.out(w, []byte(`" />`))
}
}
case Code:
r.out(w, codeTag)
escapeHTML(w, node.Literal)
r.out(w, codeCloseTag)
case Document:
break
case Paragraph:
if skipParagraphTags(node) {
break
}
if entering {
			// TODO: untangle the logic for when the newlines need
// to be added and when not.
if node.Prev != nil {
switch node.Prev.Type {
case HTMLBlock, List, Paragraph, Heading, CodeBlock, BlockQuote, HorizontalRule:
r.cr(w)
}
}
if node.Parent.Type == BlockQuote && node.Prev == nil {
r.cr(w)
}
r.out(w, pTag)
} else {
r.out(w, pCloseTag)
if !(node.Parent.Type == Item && node.Next == nil) {
r.cr(w)
}
}
case BlockQuote:
if entering {
r.cr(w)
r.out(w, blockquoteTag)
} else {
r.out(w, blockquoteCloseTag)
r.cr(w)
}
case HTMLBlock:
if r.Flags&SkipHTML != 0 {
break
}
r.cr(w)
r.out(w, node.Literal)
r.cr(w)
case Heading:
openTag, closeTag := headingTagsFromLevel(node.Level)
if entering {
if node.IsTitleblock {
attrs = append(attrs, `class="title"`)
}
if node.HeadingID != "" {
id := r.ensureUniqueHeadingID(node.HeadingID)
if r.HeadingIDPrefix != "" {
id = r.HeadingIDPrefix + id
}
if r.HeadingIDSuffix != "" {
id = id + r.HeadingIDSuffix
}
attrs = append(attrs, fmt.Sprintf(`id="%s"`, id))
}
r.cr(w)
r.tag(w, openTag, attrs)
} else {
r.out(w, closeTag)
if !(node.Parent.Type == Item && node.Next == nil) {
r.cr(w)
}
}
case HorizontalRule:
r.cr(w)
r.outHRTag(w)
r.cr(w)
case List:
openTag := ulTag
closeTag := ulCloseTag
if node.ListFlags&ListTypeOrdered != 0 {
openTag = olTag
closeTag = olCloseTag
}
if node.ListFlags&ListTypeDefinition != 0 {
openTag = dlTag
closeTag = dlCloseTag
}
if entering {
if node.IsFootnotesList {
r.out(w, footnotesDivBytes)
r.outHRTag(w)
r.cr(w)
}
r.cr(w)
if node.Parent.Type == Item && node.Parent.Parent.Tight {
r.cr(w)
}
r.tag(w, openTag[:len(openTag)-1], attrs)
r.cr(w)
} else {
r.out(w, closeTag)
//cr(w)
//if node.parent.Type != Item {
// cr(w)
//}
if node.Parent.Type == Item && node.Next != nil {
r.cr(w)
}
if node.Parent.Type == Document || node.Parent.Type == BlockQuote {
r.cr(w)
}
if node.IsFootnotesList {
r.out(w, footnotesCloseDivBytes)
}
}
case Item:
openTag := liTag
closeTag := liCloseTag
if node.ListFlags&ListTypeDefinition != 0 {
openTag = ddTag
closeTag = ddCloseTag
}
if node.ListFlags&ListTypeTerm != 0 {
openTag = dtTag
closeTag = dtCloseTag
}
if entering {
if itemOpenCR(node) {
r.cr(w)
}
if node.ListData.RefLink != nil {
slug := slugify(node.ListData.RefLink)
r.out(w, footnoteItem(r.FootnoteAnchorPrefix, slug))
break
}
r.out(w, openTag)
} else {
if node.ListData.RefLink != nil {
slug := slugify(node.ListData.RefLink)
if r.Flags&FootnoteReturnLinks != 0 {
r.out(w, footnoteReturnLink(r.FootnoteAnchorPrefix, r.FootnoteReturnLinkContents, slug))
}
}
r.out(w, closeTag)
r.cr(w)
}
case CodeBlock:
attrs = appendLanguageAttr(attrs, node.Info)
r.cr(w)
r.out(w, preTag)
r.tag(w, codeTag[:len(codeTag)-1], attrs)
escapeHTML(w, node.Literal)
r.out(w, codeCloseTag)
r.out(w, preCloseTag)
if node.Parent.Type != Item {
r.cr(w)
}
case Table:
if entering {
r.cr(w)
r.out(w, tableTag)
} else {
r.out(w, tableCloseTag)
r.cr(w)
}
case TableCell:
openTag := tdTag
closeTag := tdCloseTag
if node.IsHeader {
openTag = thTag
closeTag = thCloseTag
}
if entering {
align := cellAlignment(node.Align)
if align != "" {
attrs = append(attrs, fmt.Sprintf(`align="%s"`, align))
}
if node.Prev == nil {
r.cr(w)
}
r.tag(w, openTag, attrs)
} else {
r.out(w, closeTag)
r.cr(w)
}
case TableHead:
if entering {
r.cr(w)
r.out(w, theadTag)
} else {
r.out(w, theadCloseTag)
r.cr(w)
}
case TableBody:
if entering {
r.cr(w)
r.out(w, tbodyTag)
// XXX: this is to adhere to a rather silly test. Should fix test.
if node.FirstChild == nil {
r.cr(w)
}
} else {
r.out(w, tbodyCloseTag)
r.cr(w)
}
case TableRow:
if entering {
r.cr(w)
r.out(w, trTag)
} else {
r.out(w, trCloseTag)
r.cr(w)
}
default:
panic("Unknown node type " + node.Type.String())
}
return GoToNext
}
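To illustrate the WalkStatus contract described above, here is a hedged sketch of a wrapper renderer that delegates to an existing Renderer but prunes image subtrees by returning SkipChildren; the type name and import path are assumptions.

```go
package main

import (
	"io"
	"os"

	"github.com/russross/blackfriday" // import path assumed; a vendored v2 copy may differ
)

// skipImages wraps another Renderer and drops Image nodes by returning
// SkipChildren, so the walker never descends into their alt text.
type skipImages struct {
	blackfriday.Renderer
}

func (s skipImages) RenderNode(w io.Writer, node *blackfriday.Node, entering bool) blackfriday.WalkStatus {
	if node.Type == blackfriday.Image {
		return blackfriday.SkipChildren
	}
	return s.Renderer.RenderNode(w, node, entering)
}

func main() {
	inner := blackfriday.NewHTMLRenderer(blackfriday.HTMLRendererParameters{})
	md := []byte("text ![alt](img.png) more text\n")
	os.Stdout.Write(blackfriday.Run(md, blackfriday.WithRenderer(skipImages{inner})))
}
```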
// RenderHeader writes HTML document preamble and TOC if requested.
func (r *HTMLRenderer) RenderHeader(w io.Writer, ast *Node) {
r.writeDocumentHeader(w)
if r.Flags&TOC != 0 {
r.writeTOC(w, ast)
}
}
// RenderFooter writes HTML document footer.
func (r *HTMLRenderer) RenderFooter(w io.Writer, ast *Node) {
if r.Flags&CompletePage == 0 {
return
}
io.WriteString(w, "\n</body>\n</html>\n")
}
func (r *HTMLRenderer) writeDocumentHeader(w io.Writer) {
if r.Flags&CompletePage == 0 {
return
}
ending := ""
if r.Flags&UseXHTML != 0 {
io.WriteString(w, "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" ")
io.WriteString(w, "\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n")
io.WriteString(w, "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n")
ending = " /"
} else {
io.WriteString(w, "<!DOCTYPE html>\n")
io.WriteString(w, "<html>\n")
}
io.WriteString(w, "<head>\n")
io.WriteString(w, " <title>")
if r.Flags&Smartypants != 0 {
r.sr.Process(w, []byte(r.Title))
} else {
escapeHTML(w, []byte(r.Title))
}
io.WriteString(w, "</title>\n")
io.WriteString(w, " <meta name=\"GENERATOR\" content=\"Blackfriday Markdown Processor v")
io.WriteString(w, Version)
io.WriteString(w, "\"")
io.WriteString(w, ending)
io.WriteString(w, ">\n")
io.WriteString(w, " <meta charset=\"utf-8\"")
io.WriteString(w, ending)
io.WriteString(w, ">\n")
if r.CSS != "" {
io.WriteString(w, " <link rel=\"stylesheet\" type=\"text/css\" href=\"")
escapeHTML(w, []byte(r.CSS))
io.WriteString(w, "\"")
io.WriteString(w, ending)
io.WriteString(w, ">\n")
}
if r.Icon != "" {
io.WriteString(w, " <link rel=\"icon\" type=\"image/x-icon\" href=\"")
escapeHTML(w, []byte(r.Icon))
io.WriteString(w, "\"")
io.WriteString(w, ending)
io.WriteString(w, ">\n")
}
io.WriteString(w, "</head>\n")
io.WriteString(w, "<body>\n\n")
}
func (r *HTMLRenderer) writeTOC(w io.Writer, ast *Node) {
buf := bytes.Buffer{}
inHeading := false
tocLevel := 0
headingCount := 0
ast.Walk(func(node *Node, entering bool) WalkStatus {
if node.Type == Heading && !node.HeadingData.IsTitleblock {
inHeading = entering
if entering {
node.HeadingID = fmt.Sprintf("toc_%d", headingCount)
if node.Level == tocLevel {
buf.WriteString("</li>\n\n<li>")
} else if node.Level < tocLevel {
for node.Level < tocLevel {
tocLevel--
buf.WriteString("</li>\n</ul>")
}
buf.WriteString("</li>\n\n<li>")
} else {
for node.Level > tocLevel {
tocLevel++
buf.WriteString("\n<ul>\n<li>")
}
}
fmt.Fprintf(&buf, `<a href="#toc_%d">`, headingCount)
headingCount++
} else {
buf.WriteString("</a>")
}
return GoToNext
}
if inHeading {
return r.RenderNode(&buf, node, entering)
}
return GoToNext
})
for ; tocLevel > 0; tocLevel-- {
buf.WriteString("</li>\n</ul>")
}
if buf.Len() > 0 {
io.WriteString(w, "<nav>\n")
w.Write(buf.Bytes())
io.WriteString(w, "\n\n</nav>\n")
}
r.lastOutputLen = buf.Len()
}

File diff suppressed because it is too large

View File

@ -1,940 +0,0 @@
// Blackfriday Markdown Processor
// Available at http://github.com/russross/blackfriday
//
// Copyright © 2011 Russ Ross <russ@russross.com>.
// Distributed under the Simplified BSD License.
// See README.md for details.
package blackfriday
import (
"bytes"
"fmt"
"io"
"strings"
"unicode/utf8"
)
//
// Markdown parsing and processing
//
// Version string of the package. Appears in the rendered document when
// CompletePage flag is on.
const Version = "2.0"
// Extensions is a bitwise or'ed collection of enabled Blackfriday's
// extensions.
type Extensions int
// These are the supported markdown parsing extensions.
// OR these values together to select multiple extensions.
const (
NoExtensions Extensions = 0
NoIntraEmphasis Extensions = 1 << iota // Ignore emphasis markers inside words
Tables // Render tables
FencedCode // Render fenced code blocks
Autolink // Detect embedded URLs that are not explicitly marked
Strikethrough // Strikethrough text using ~~test~~
LaxHTMLBlocks // Loosen up HTML block parsing rules
SpaceHeadings // Be strict about prefix heading rules
HardLineBreak // Translate newlines into line breaks
TabSizeEight // Expand tabs to eight spaces instead of four
Footnotes // Pandoc-style footnotes
NoEmptyLineBeforeBlock // No need to insert an empty line to start a (code, quote, ordered list, unordered list) block
HeadingIDs // specify heading IDs with {#id}
Titleblock // Titleblock ala pandoc
AutoHeadingIDs // Create the heading ID from the text
BackslashLineBreak // Translate trailing backslashes into line breaks
DefinitionLists // Render definition lists
CommonHTMLFlags HTMLFlags = UseXHTML | Smartypants |
SmartypantsFractions | SmartypantsDashes | SmartypantsLatexDashes
CommonExtensions Extensions = NoIntraEmphasis | Tables | FencedCode |
Autolink | Strikethrough | SpaceHeadings | HeadingIDs |
BackslashLineBreak | DefinitionLists
)
// ListType contains bitwise or'ed flags for list and list item objects.
type ListType int
// These are the possible flag values for the ListItem renderer.
// Multiple flag values may be ORed together.
// These are mostly of interest if you are writing a new output format.
const (
ListTypeOrdered ListType = 1 << iota
ListTypeDefinition
ListTypeTerm
ListItemContainsBlock
ListItemBeginningOfList // TODO: figure out if this is of any use now
ListItemEndOfList
)
// CellAlignFlags holds a type of alignment in a table cell.
type CellAlignFlags int
// These are the possible flag values for the table cell renderer.
// Only a single one of these values will be used; they are not ORed together.
// These are mostly of interest if you are writing a new output format.
const (
TableAlignmentLeft CellAlignFlags = 1 << iota
TableAlignmentRight
TableAlignmentCenter = (TableAlignmentLeft | TableAlignmentRight)
)
// The size of a tab stop.
const (
TabSizeDefault = 4
TabSizeDouble = 8
)
// blockTags is a set of tags that are recognized as HTML block tags.
// Any of these can be included in markdown text without special escaping.
var blockTags = map[string]struct{}{
"blockquote": struct{}{},
"del": struct{}{},
"div": struct{}{},
"dl": struct{}{},
"fieldset": struct{}{},
"form": struct{}{},
"h1": struct{}{},
"h2": struct{}{},
"h3": struct{}{},
"h4": struct{}{},
"h5": struct{}{},
"h6": struct{}{},
"iframe": struct{}{},
"ins": struct{}{},
"math": struct{}{},
"noscript": struct{}{},
"ol": struct{}{},
"pre": struct{}{},
"p": struct{}{},
"script": struct{}{},
"style": struct{}{},
"table": struct{}{},
"ul": struct{}{},
// HTML5
"address": struct{}{},
"article": struct{}{},
"aside": struct{}{},
"canvas": struct{}{},
"figcaption": struct{}{},
"figure": struct{}{},
"footer": struct{}{},
"header": struct{}{},
"hgroup": struct{}{},
"main": struct{}{},
"nav": struct{}{},
"output": struct{}{},
"progress": struct{}{},
"section": struct{}{},
"video": struct{}{},
}
// Renderer is the rendering interface. This is mostly of interest if you are
// implementing a new rendering format.
//
// Only an HTML implementation is provided in this repository, see the README
// for external implementations.
type Renderer interface {
// RenderNode is the main rendering method. It will be called once for
// every leaf node and twice for every non-leaf node (first with
// entering=true, then with entering=false). The method should write its
// rendition of the node to the supplied writer w.
RenderNode(w io.Writer, node *Node, entering bool) WalkStatus
// RenderHeader is a method that allows the renderer to produce some
// content preceding the main body of the output document. The header is
// understood in the broad sense here. For example, the default HTML
// renderer will write not only the HTML document preamble, but also the
// table of contents if it was requested.
//
// The method will be passed an entire document tree, in case a particular
// implementation needs to inspect it to produce output.
//
// The output should be written to the supplied writer w. If your
// implementation has no header to write, supply an empty implementation.
RenderHeader(w io.Writer, ast *Node)
// RenderFooter is a symmetric counterpart of RenderHeader.
RenderFooter(w io.Writer, ast *Node)
}
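As a sketch of what a from-scratch implementation of this interface can look like, the following renderer emits only the literal text of a document and leaves the header and footer hooks empty; the type name and import path are illustrative.

```go
package main

import (
	"fmt"
	"io"

	"github.com/russross/blackfriday" // import path assumed; a vendored v2 copy may differ
)

// textOnly satisfies Renderer by writing just the Text leaves and ignoring
// the header/footer hooks.
type textOnly struct{}

func (textOnly) RenderNode(w io.Writer, node *blackfriday.Node, entering bool) blackfriday.WalkStatus {
	if entering && node.Type == blackfriday.Text {
		w.Write(node.Literal)
	}
	return blackfriday.GoToNext
}

func (textOnly) RenderHeader(w io.Writer, ast *blackfriday.Node) {}
func (textOnly) RenderFooter(w io.Writer, ast *blackfriday.Node) {}

func main() {
	out := blackfriday.Run([]byte("# Hi\n\nSome *emphasis* here.\n"), blackfriday.WithRenderer(textOnly{}))
	fmt.Printf("%s\n", out)
	// Prints roughly: HiSome emphasis here.
}
```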
// Callback functions for inline parsing. One such function is defined
// for each character that triggers a response when parsing inline data.
type inlineParser func(p *Markdown, data []byte, offset int) (int, *Node)
// Markdown is a type that holds extensions and the runtime state used by
// Parse, and the renderer. You cannot use it directly; construct it with New.
type Markdown struct {
renderer Renderer
referenceOverride ReferenceOverrideFunc
refs map[string]*reference
inlineCallback [256]inlineParser
extensions Extensions
nesting int
maxNesting int
insideLink bool
// Footnotes need to be ordered as well as available to quickly check for
// presence. If a ref is also a footnote, it's stored both in refs and here
// in notes. Slice is nil if footnotes not enabled.
notes []*reference
doc *Node
tip *Node // = doc
oldTip *Node
lastMatchedContainer *Node // = doc
allClosed bool
}
func (p *Markdown) getRef(refid string) (ref *reference, found bool) {
if p.referenceOverride != nil {
r, overridden := p.referenceOverride(refid)
if overridden {
if r == nil {
return nil, false
}
return &reference{
link: []byte(r.Link),
title: []byte(r.Title),
noteID: 0,
hasBlock: false,
text: []byte(r.Text)}, true
}
}
// refs are case insensitive
ref, found = p.refs[strings.ToLower(refid)]
return ref, found
}
func (p *Markdown) finalize(block *Node) {
above := block.Parent
block.open = false
p.tip = above
}
func (p *Markdown) addChild(node NodeType, offset uint32) *Node {
return p.addExistingChild(NewNode(node), offset)
}
func (p *Markdown) addExistingChild(node *Node, offset uint32) *Node {
for !p.tip.canContain(node.Type) {
p.finalize(p.tip)
}
p.tip.AppendChild(node)
p.tip = node
return node
}
func (p *Markdown) closeUnmatchedBlocks() {
if !p.allClosed {
for p.oldTip != p.lastMatchedContainer {
parent := p.oldTip.Parent
p.finalize(p.oldTip)
p.oldTip = parent
}
p.allClosed = true
}
}
//
//
// Public interface
//
//
// Reference represents the details of a link.
// See the documentation in Options for more details on use-case.
type Reference struct {
// Link is usually the URL the reference points to.
Link string
// Title is the alternate text describing the link in more detail.
Title string
// Text is the optional text to override the ref with if the syntax used was
// [refid][]
Text string
}
// ReferenceOverrideFunc is expected to be called with a reference string and
// return either a valid Reference type that the reference string maps to or
// nil. If overridden is false, the default reference logic will be executed.
// See the documentation in Options for more details on use-case.
type ReferenceOverrideFunc func(reference string) (ref *Reference, overridden bool)
// New constructs a Markdown processor. You can use the same With* functions as
// for Run() to customize the parser's behavior and the renderer.
func New(opts ...Option) *Markdown {
var p Markdown
for _, opt := range opts {
opt(&p)
}
p.refs = make(map[string]*reference)
p.maxNesting = 16
p.insideLink = false
docNode := NewNode(Document)
p.doc = docNode
p.tip = docNode
p.oldTip = docNode
p.lastMatchedContainer = docNode
p.allClosed = true
// register inline parsers
p.inlineCallback[' '] = maybeLineBreak
p.inlineCallback['*'] = emphasis
p.inlineCallback['_'] = emphasis
if p.extensions&Strikethrough != 0 {
p.inlineCallback['~'] = emphasis
}
p.inlineCallback['`'] = codeSpan
p.inlineCallback['\n'] = lineBreak
p.inlineCallback['['] = link
p.inlineCallback['<'] = leftAngle
p.inlineCallback['\\'] = escape
p.inlineCallback['&'] = entity
p.inlineCallback['!'] = maybeImage
p.inlineCallback['^'] = maybeInlineFootnote
if p.extensions&Autolink != 0 {
p.inlineCallback['h'] = maybeAutoLink
p.inlineCallback['m'] = maybeAutoLink
p.inlineCallback['f'] = maybeAutoLink
p.inlineCallback['H'] = maybeAutoLink
p.inlineCallback['M'] = maybeAutoLink
p.inlineCallback['F'] = maybeAutoLink
}
if p.extensions&Footnotes != 0 {
p.notes = make([]*reference, 0)
}
return &p
}
// Option customizes the Markdown processor's default behavior.
type Option func(*Markdown)
// WithRenderer allows you to override the default renderer.
func WithRenderer(r Renderer) Option {
return func(p *Markdown) {
p.renderer = r
}
}
// WithExtensions allows you to pick some of the many extensions provided by
// Blackfriday. You can bitwise OR them.
func WithExtensions(e Extensions) Option {
return func(p *Markdown) {
p.extensions = e
}
}
// WithNoExtensions turns off all extensions and custom behavior.
func WithNoExtensions() Option {
return func(p *Markdown) {
p.extensions = NoExtensions
p.renderer = NewHTMLRenderer(HTMLRendererParameters{
Flags: HTMLFlagsNone,
})
}
}
// WithRefOverride sets an optional function callback that is called every
// time a reference is resolved.
//
// In Markdown, the link reference syntax can be made to resolve a link to
// a reference instead of an inline URL, in one of the following ways:
//
// * [link text][refid]
// * [refid][]
//
// Usually, the refid is defined at the bottom of the Markdown document. If
// this override function is provided, the refid is passed to the override
// function first, before consulting the defined refids at the bottom. If
// the override function indicates an override did not occur, the refids at
// the bottom will be used to fill in the link details.
func WithRefOverride(o ReferenceOverrideFunc) Option {
return func(p *Markdown) {
p.referenceOverride = o
}
}
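A brief sketch of the override in use: the callback resolves one hypothetical refid ("docs") itself and defers every other refid to the definitions at the bottom of the document. The URLs and import path are assumptions.

```go
package main

import (
	"os"

	"github.com/russross/blackfriday" // import path assumed; a vendored v2 copy may differ
)

func main() {
	// Resolve the hypothetical refid "docs" in code; all other refids fall
	// back to the definitions in the document itself.
	override := func(refid string) (*blackfriday.Reference, bool) {
		if refid == "docs" {
			return &blackfriday.Reference{
				Link:  "https://example.com/docs", // illustrative URL
				Title: "Documentation",
			}, true
		}
		return nil, false
	}
	md := []byte("See the [manual][docs] or the [tracker][issues].\n\n[issues]: https://example.com/issues\n")
	os.Stdout.Write(blackfriday.Run(md, blackfriday.WithRefOverride(override)))
}
```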
// Run is the main entry point to Blackfriday. It parses and renders a
// block of markdown-encoded text.
//
// The simplest invocation of Run takes one argument, input:
// output := Run(input)
// This will parse the input with CommonExtensions enabled and render it with
// the default HTMLRenderer (with CommonHTMLFlags).
//
// Variadic arguments opts can customize the default behavior. Since the Markdown
// type does not contain exported fields, you cannot use it directly. Instead,
// use the With* functions. For example, this will call the most basic
// functionality, with no extensions:
// output := Run(input, WithNoExtensions())
//
// You can use any number of With* arguments, even contradicting ones. They
// will be applied in order of appearance and the latter will override the
// former:
// output := Run(input, WithNoExtensions(), WithExtensions(exts),
// WithRenderer(yourRenderer))
func Run(input []byte, opts ...Option) []byte {
r := NewHTMLRenderer(HTMLRendererParameters{
Flags: CommonHTMLFlags,
})
optList := []Option{WithRenderer(r), WithExtensions(CommonExtensions)}
optList = append(optList, opts...)
parser := New(optList...)
ast := parser.Parse(input)
var buf bytes.Buffer
parser.renderer.RenderHeader(&buf, ast)
ast.Walk(func(node *Node, entering bool) WalkStatus {
return parser.renderer.RenderNode(&buf, node, entering)
})
parser.renderer.RenderFooter(&buf, ast)
return buf.Bytes()
}
// Parse is an entry point to the parsing part of Blackfriday. It takes an
// input markdown document and produces a syntax tree for its contents. This
// tree can then be rendered with a default or custom renderer, or
// analyzed/transformed by the caller to whatever non-standard needs they have.
// The return value is the root node of the syntax tree.
func (p *Markdown) Parse(input []byte) *Node {
p.block(input)
// Walk the tree and finish up some of unfinished blocks
for p.tip != nil {
p.finalize(p.tip)
}
// Walk the tree again and process inline markdown in each block
p.doc.Walk(func(node *Node, entering bool) WalkStatus {
if node.Type == Paragraph || node.Type == Heading || node.Type == TableCell {
p.inline(node, node.content)
node.content = nil
}
return GoToNext
})
p.parseRefsToAST()
return p.doc
}
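A sketch of the parse-then-inspect workflow described above: construct a processor with New, call Parse, and walk the resulting tree to collect heading text without rendering anything. The import path is an assumption.

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/russross/blackfriday" // import path assumed; a vendored v2 copy may differ
)

func main() {
	md := []byte("# Title\n\nbody\n\n## Section one\n\nmore body\n")
	parser := blackfriday.New(blackfriday.WithExtensions(blackfriday.CommonExtensions))
	ast := parser.Parse(md)

	// Collect the literal text of every heading, in document order.
	ast.Walk(func(node *blackfriday.Node, entering bool) blackfriday.WalkStatus {
		if entering && node.Type == blackfriday.Heading {
			var buf bytes.Buffer
			for child := node.FirstChild; child != nil; child = child.Next {
				buf.Write(child.Literal)
			}
			fmt.Printf("level %d: %s\n", node.Level, buf.String())
		}
		return blackfriday.GoToNext
	})
}
```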
func (p *Markdown) parseRefsToAST() {
if p.extensions&Footnotes == 0 || len(p.notes) == 0 {
return
}
p.tip = p.doc
block := p.addBlock(List, nil)
block.IsFootnotesList = true
block.ListFlags = ListTypeOrdered
flags := ListItemBeginningOfList
// Note: this loop is intentionally explicit, not range-form. This is
// because the body of the loop will append nested footnotes to p.notes and
// we need to process those late additions. Range form would only walk over
// the fixed initial set.
for i := 0; i < len(p.notes); i++ {
ref := p.notes[i]
p.addExistingChild(ref.footnote, 0)
block := ref.footnote
block.ListFlags = flags | ListTypeOrdered
block.RefLink = ref.link
if ref.hasBlock {
flags |= ListItemContainsBlock
p.block(ref.title)
} else {
p.inline(block, ref.title)
}
flags &^= ListItemBeginningOfList | ListItemContainsBlock
}
above := block.Parent
finalizeList(block)
p.tip = above
block.Walk(func(node *Node, entering bool) WalkStatus {
if node.Type == Paragraph || node.Type == Heading {
p.inline(node, node.content)
node.content = nil
}
return GoToNext
})
}
//
// Link references
//
// This section implements support for references that (usually) appear
// as footnotes in a document, and can be referenced anywhere in the document.
// The basic format is:
//
// [1]: http://www.google.com/ "Google"
// [2]: http://www.github.com/ "Github"
//
// Anywhere in the document, the reference can be linked by referring to its
// label, i.e., 1 and 2 in this example, as in:
//
// This library is hosted on [Github][2], a git hosting site.
//
// Actual footnotes as specified in Pandoc and supported by some other Markdown
// libraries such as php-markdown are also taken care of. They look like this:
//
// This sentence needs a bit of further explanation.[^note]
//
// [^note]: This is the explanation.
//
// Footnotes should be placed at the end of the document in an ordered list.
// Inline footnotes such as:
//
// Inline footnotes^[Not supported.] also exist.
//
// are not yet supported.
// reference holds all information necessary for reference-style links or
// footnotes.
//
// Consider this markdown with reference-style links:
//
// [link][ref]
//
// [ref]: /url/ "tooltip title"
//
// It will be ultimately converted to this HTML:
//
// <p><a href=\"/url/\" title=\"title\">link</a></p>
//
// And a reference structure will be populated as follows:
//
// p.refs["ref"] = &reference{
// link: "/url/",
// title: "tooltip title",
// }
//
// Alternatively, reference can contain information about a footnote. Consider
// this markdown:
//
// Text needing a footnote.[^a]
//
// [^a]: This is the note
//
// A reference structure will be populated as follows:
//
// p.refs["a"] = &reference{
// link: "a",
// title: "This is the note",
// noteID: <some positive int>,
// }
//
// TODO: As you can see, it begs for splitting into two dedicated structures
// for refs and for footnotes.
type reference struct {
link []byte
title []byte
noteID int // 0 if not a footnote ref
hasBlock bool
footnote *Node // a link to the Item node within a list of footnotes
text []byte // only gets populated by refOverride feature with Reference.Text
}
func (r *reference) String() string {
return fmt.Sprintf("{link: %q, title: %q, text: %q, noteID: %d, hasBlock: %v}",
r.link, r.title, r.text, r.noteID, r.hasBlock)
}
// Check whether or not data starts with a reference link.
// If so, it is parsed and stored in the list of references
// (in the render struct).
// Returns the number of bytes to skip to move past it,
// or zero if the first line is not a reference.
func isReference(p *Markdown, data []byte, tabSize int) int {
// up to 3 optional leading spaces
if len(data) < 4 {
return 0
}
i := 0
for i < 3 && data[i] == ' ' {
i++
}
noteID := 0
// id part: anything but a newline between brackets
if data[i] != '[' {
return 0
}
i++
if p.extensions&Footnotes != 0 {
if i < len(data) && data[i] == '^' {
// we can set it to anything here because the proper noteIds will
// be assigned later during the second pass. It just has to be != 0
noteID = 1
i++
}
}
idOffset := i
for i < len(data) && data[i] != '\n' && data[i] != '\r' && data[i] != ']' {
i++
}
if i >= len(data) || data[i] != ']' {
return 0
}
idEnd := i
	// footnotes can have an empty ID, like this: [^], but a reference cannot be
// empty like this: []. Break early if it's not a footnote and there's no ID
if noteID == 0 && idOffset == idEnd {
return 0
}
// spacer: colon (space | tab)* newline? (space | tab)*
i++
if i >= len(data) || data[i] != ':' {
return 0
}
i++
for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
i++
}
if i < len(data) && (data[i] == '\n' || data[i] == '\r') {
i++
if i < len(data) && data[i] == '\n' && data[i-1] == '\r' {
i++
}
}
for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
i++
}
if i >= len(data) {
return 0
}
var (
linkOffset, linkEnd int
titleOffset, titleEnd int
lineEnd int
raw []byte
hasBlock bool
)
if p.extensions&Footnotes != 0 && noteID != 0 {
linkOffset, linkEnd, raw, hasBlock = scanFootnote(p, data, i, tabSize)
lineEnd = linkEnd
} else {
linkOffset, linkEnd, titleOffset, titleEnd, lineEnd = scanLinkRef(p, data, i)
}
if lineEnd == 0 {
return 0
}
// a valid ref has been found
ref := &reference{
noteID: noteID,
hasBlock: hasBlock,
}
if noteID > 0 {
// reusing the link field for the id since footnotes don't have links
ref.link = data[idOffset:idEnd]
// if footnote, it's not really a title, it's the contained text
ref.title = raw
} else {
ref.link = data[linkOffset:linkEnd]
ref.title = data[titleOffset:titleEnd]
}
// id matches are case-insensitive
id := string(bytes.ToLower(data[idOffset:idEnd]))
p.refs[id] = ref
return lineEnd
}
func scanLinkRef(p *Markdown, data []byte, i int) (linkOffset, linkEnd, titleOffset, titleEnd, lineEnd int) {
// link: whitespace-free sequence, optionally between angle brackets
if data[i] == '<' {
i++
}
linkOffset = i
for i < len(data) && data[i] != ' ' && data[i] != '\t' && data[i] != '\n' && data[i] != '\r' {
i++
}
linkEnd = i
if data[linkOffset] == '<' && data[linkEnd-1] == '>' {
linkOffset++
linkEnd--
}
// optional spacer: (space | tab)* (newline | '\'' | '"' | '(' )
for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
i++
}
if i < len(data) && data[i] != '\n' && data[i] != '\r' && data[i] != '\'' && data[i] != '"' && data[i] != '(' {
return
}
// compute end-of-line
if i >= len(data) || data[i] == '\r' || data[i] == '\n' {
lineEnd = i
}
if i+1 < len(data) && data[i] == '\r' && data[i+1] == '\n' {
lineEnd++
}
// optional (space|tab)* spacer after a newline
if lineEnd > 0 {
i = lineEnd + 1
for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
i++
}
}
// optional title: any non-newline sequence enclosed in '"() alone on its line
if i+1 < len(data) && (data[i] == '\'' || data[i] == '"' || data[i] == '(') {
i++
titleOffset = i
// look for EOL
for i < len(data) && data[i] != '\n' && data[i] != '\r' {
i++
}
if i+1 < len(data) && data[i] == '\n' && data[i+1] == '\r' {
titleEnd = i + 1
} else {
titleEnd = i
}
// step back
i--
for i > titleOffset && (data[i] == ' ' || data[i] == '\t') {
i--
}
if i > titleOffset && (data[i] == '\'' || data[i] == '"' || data[i] == ')') {
lineEnd = titleEnd
titleEnd = i
}
}
return
}
// The first bit of this logic is the same as Parser.listItem, but the rest
// is much simpler. This function simply finds the entire block and shifts it
// over by one tab if it is indeed a block (just returns the line if it's not).
// blockEnd is the end of the section in the input buffer, and contents is the
// extracted text that was shifted over one tab. It will need to be rendered at
// the end of the document.
func scanFootnote(p *Markdown, data []byte, i, indentSize int) (blockStart, blockEnd int, contents []byte, hasBlock bool) {
if i == 0 || len(data) == 0 {
return
}
// skip leading whitespace on first line
for i < len(data) && data[i] == ' ' {
i++
}
blockStart = i
// find the end of the line
blockEnd = i
for i < len(data) && data[i-1] != '\n' {
i++
}
// get working buffer
var raw bytes.Buffer
// put the first line into the working buffer
raw.Write(data[blockEnd:i])
blockEnd = i
// process the following lines
containsBlankLine := false
gatherLines:
for blockEnd < len(data) {
i++
// find the end of this line
for i < len(data) && data[i-1] != '\n' {
i++
}
// if it is an empty line, guess that it is part of this item
// and move on to the next line
if p.isEmpty(data[blockEnd:i]) > 0 {
containsBlankLine = true
blockEnd = i
continue
}
n := 0
if n = isIndented(data[blockEnd:i], indentSize); n == 0 {
// this is the end of the block.
// we don't want to include this last line in the index.
break gatherLines
}
// if there were blank lines before this one, insert a new one now
if containsBlankLine {
raw.WriteByte('\n')
containsBlankLine = false
}
// get rid of that first tab, write to buffer
raw.Write(data[blockEnd+n : i])
hasBlock = true
blockEnd = i
}
if data[blockEnd-1] != '\n' {
raw.WriteByte('\n')
}
contents = raw.Bytes()
return
}
//
//
// Miscellaneous helper functions
//
//
// Test if a character is a punctuation symbol.
// Taken from a private function in regexp in the stdlib.
func ispunct(c byte) bool {
for _, r := range []byte("!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~") {
if c == r {
return true
}
}
return false
}
// Test if a character is a whitespace character.
func isspace(c byte) bool {
return c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' || c == '\v'
}
// Test if a character is a letter.
func isletter(c byte) bool {
return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
}
// Test if a character is a letter or a digit.
// TODO: check when this is looking for ASCII alnum and when it should use unicode
func isalnum(c byte) bool {
return (c >= '0' && c <= '9') || isletter(c)
}
// Replace tab characters with spaces, aligning to the next TAB_SIZE column.
// Always ends the output with a newline.
func expandTabs(out *bytes.Buffer, line []byte, tabSize int) {
// first, check for common cases: no tabs, or only tabs at beginning of line
i, prefix := 0, 0
slowcase := false
for i = 0; i < len(line); i++ {
if line[i] == '\t' {
if prefix == i {
prefix++
} else {
slowcase = true
break
}
}
}
// no need to decode runes if all tabs are at the beginning of the line
if !slowcase {
for i = 0; i < prefix*tabSize; i++ {
out.WriteByte(' ')
}
out.Write(line[prefix:])
return
}
// the slow case: we need to count runes to figure out how
// many spaces to insert for each tab
column := 0
i = 0
for i < len(line) {
start := i
for i < len(line) && line[i] != '\t' {
_, size := utf8.DecodeRune(line[i:])
i += size
column++
}
if i > start {
out.Write(line[start:i])
}
if i >= len(line) {
break
}
for {
out.WriteByte(' ')
column++
if column%tabSize == 0 {
break
}
}
i++
}
}
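A small worked illustration of the column alignment described above; since expandTabs is unexported, this hypothetical snippet assumes it is called from within the package (for example, from a test).

```go
// Hypothetical in-package helper: 'a' ends at column 1, the first tab pads to
// column 4, "bc" then ends at column 6, and the second tab pads to column 8.
func demoExpandTabs() string {
	var buf bytes.Buffer
	expandTabs(&buf, []byte("a\tbc\td"), 4)
	return buf.String() // "a   bc  d"
}
```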
// Find if a line counts as indented or not.
// Returns the number of characters in the indent (0 = not indented).
func isIndented(data []byte, indentSize int) int {
if len(data) == 0 {
return 0
}
if data[0] == '\t' {
return 1
}
if len(data) < indentSize {
return 0
}
for i := 0; i < indentSize; i++ {
if data[i] != ' ' {
return 0
}
}
return indentSize
}
// Create a URL-safe slug for fragments.
func slugify(in []byte) []byte {
if len(in) == 0 {
return in
}
out := make([]byte, 0, len(in))
sym := false
for _, ch := range in {
if isalnum(ch) {
sym = false
out = append(out, ch)
} else if sym {
continue
} else {
out = append(out, '-')
sym = true
}
}
var a, b int
var ch byte
for a, ch = range out {
if ch != '-' {
break
}
}
for b = len(out) - 1; b > 0; b-- {
if out[b] != '-' {
break
}
}
return out[a : b+1]
}
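Similarly, a hypothetical in-package illustration of slugify: runs of non-alphanumeric bytes collapse into a single '-', and leading and trailing dashes are trimmed.

```go
// Hypothetical in-package helper (slugify is unexported).
func demoSlugify() string {
	return string(slugify([]byte("Go 1.11 -- rocks!"))) // "Go-1-11-rocks"
}
```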

View File

@ -1,354 +0,0 @@
package blackfriday
import (
"bytes"
"fmt"
)
// NodeType specifies a type of a single node of a syntax tree. Usually one
// node (and its type) corresponds to a single markdown feature, e.g. emphasis
// or code block.
type NodeType int
// Constants for identifying different types of nodes. See NodeType.
const (
Document NodeType = iota
BlockQuote
List
Item
Paragraph
Heading
HorizontalRule
Emph
Strong
Del
Link
Image
Text
HTMLBlock
CodeBlock
Softbreak
Hardbreak
Code
HTMLSpan
Table
TableCell
TableHead
TableBody
TableRow
)
var nodeTypeNames = []string{
Document: "Document",
BlockQuote: "BlockQuote",
List: "List",
Item: "Item",
Paragraph: "Paragraph",
Heading: "Heading",
HorizontalRule: "HorizontalRule",
Emph: "Emph",
Strong: "Strong",
Del: "Del",
Link: "Link",
Image: "Image",
Text: "Text",
HTMLBlock: "HTMLBlock",
CodeBlock: "CodeBlock",
Softbreak: "Softbreak",
Hardbreak: "Hardbreak",
Code: "Code",
HTMLSpan: "HTMLSpan",
Table: "Table",
TableCell: "TableCell",
TableHead: "TableHead",
TableBody: "TableBody",
TableRow: "TableRow",
}
func (t NodeType) String() string {
return nodeTypeNames[t]
}
// ListData contains fields relevant to a List and Item node type.
type ListData struct {
ListFlags ListType
Tight bool // Skip <p>s around list item data if true
BulletChar byte // '*', '+' or '-' in bullet lists
Delimiter byte // '.' or ')' after the number in ordered lists
RefLink []byte // If not nil, turns this list item into a footnote item and triggers different rendering
IsFootnotesList bool // This is a list of footnotes
}
// LinkData contains fields relevant to a Link node type.
type LinkData struct {
Destination []byte // Destination is what goes into a href
Title []byte // Title is the tooltip thing that goes in a title attribute
NoteID int // NoteID contains a serial number of a footnote, zero if it's not a footnote
Footnote *Node // If it's a footnote, this is a direct link to the footnote Node. Otherwise nil.
}
// CodeBlockData contains fields relevant to a CodeBlock node type.
type CodeBlockData struct {
IsFenced bool // Specifies whether it's a fenced code block or an indented one
Info []byte // This holds the info string
FenceChar byte
FenceLength int
FenceOffset int
}
// TableCellData contains fields relevant to a TableCell node type.
type TableCellData struct {
IsHeader bool // This tells if it's under the header row
Align CellAlignFlags // This holds the value for align attribute
}
// HeadingData contains fields relevant to a Heading node type.
type HeadingData struct {
Level int // This holds the heading level number
HeadingID string // This might hold heading ID, if present
IsTitleblock bool // Specifies whether it's a title block
}
// Node is a single element in the abstract syntax tree of the parsed document.
// It holds connections to the structurally neighboring nodes and, for certain
// types of nodes, additional information that might be needed when rendering.
type Node struct {
Type NodeType // Determines the type of the node
Parent *Node // Points to the parent
FirstChild *Node // Points to the first child, if any
LastChild *Node // Points to the last child, if any
Prev *Node // Previous sibling; nil if it's the first child
Next *Node // Next sibling; nil if it's the last child
Literal []byte // Text contents of the leaf nodes
HeadingData // Populated if Type is Heading
ListData // Populated if Type is List
CodeBlockData // Populated if Type is CodeBlock
LinkData // Populated if Type is Link
TableCellData // Populated if Type is TableCell
content []byte // Markdown content of the block nodes
open bool // Specifies an open block node that has not been finished to process yet
}
// NewNode allocates a node of a specified type.
func NewNode(typ NodeType) *Node {
return &Node{
Type: typ,
open: true,
}
}
func (n *Node) String() string {
ellipsis := ""
snippet := n.Literal
if len(snippet) > 16 {
snippet = snippet[:16]
ellipsis = "..."
}
return fmt.Sprintf("%s: '%s%s'", n.Type, snippet, ellipsis)
}
// Unlink removes node 'n' from the tree.
// It panics if the node is nil.
func (n *Node) Unlink() {
if n.Prev != nil {
n.Prev.Next = n.Next
} else if n.Parent != nil {
n.Parent.FirstChild = n.Next
}
if n.Next != nil {
n.Next.Prev = n.Prev
} else if n.Parent != nil {
n.Parent.LastChild = n.Prev
}
n.Parent = nil
n.Next = nil
n.Prev = nil
}
// AppendChild adds a node 'child' as a child of 'n'.
// It panics if either node is nil.
func (n *Node) AppendChild(child *Node) {
child.Unlink()
child.Parent = n
if n.LastChild != nil {
n.LastChild.Next = child
child.Prev = n.LastChild
n.LastChild = child
} else {
n.FirstChild = child
n.LastChild = child
}
}
// InsertBefore inserts 'sibling' immediately before 'n'.
// It panics if either node is nil.
func (n *Node) InsertBefore(sibling *Node) {
sibling.Unlink()
sibling.Prev = n.Prev
if sibling.Prev != nil {
sibling.Prev.Next = sibling
}
sibling.Next = n
n.Prev = sibling
sibling.Parent = n.Parent
if sibling.Prev == nil {
sibling.Parent.FirstChild = sibling
}
}
func (n *Node) isContainer() bool {
switch n.Type {
case Document:
fallthrough
case BlockQuote:
fallthrough
case List:
fallthrough
case Item:
fallthrough
case Paragraph:
fallthrough
case Heading:
fallthrough
case Emph:
fallthrough
case Strong:
fallthrough
case Del:
fallthrough
case Link:
fallthrough
case Image:
fallthrough
case Table:
fallthrough
case TableHead:
fallthrough
case TableBody:
fallthrough
case TableRow:
fallthrough
case TableCell:
return true
default:
return false
}
}
func (n *Node) canContain(t NodeType) bool {
if n.Type == List {
return t == Item
}
if n.Type == Document || n.Type == BlockQuote || n.Type == Item {
return t != Item
}
if n.Type == Table {
return t == TableHead || t == TableBody
}
if n.Type == TableHead || n.Type == TableBody {
return t == TableRow
}
if n.Type == TableRow {
return t == TableCell
}
return false
}
// WalkStatus allows NodeVisitor to have some control over the tree traversal.
// It is returned from NodeVisitor and different values allow Node.Walk to
// decide which node to go to next.
type WalkStatus int
const (
// GoToNext is the default traversal of every node.
GoToNext WalkStatus = iota
// SkipChildren tells walker to skip all children of current node.
SkipChildren
// Terminate tells walker to terminate the traversal.
Terminate
)
// NodeVisitor is a callback to be called when traversing the syntax tree.
// Called twice for every node: once with entering=true when the branch is
// first visited, then with entering=false after all the children are done.
type NodeVisitor func(node *Node, entering bool) WalkStatus
// Walk is a convenience method that instantiates a walker and starts a
// traversal of subtree rooted at n.
func (n *Node) Walk(visitor NodeVisitor) {
w := newNodeWalker(n)
for w.current != nil {
status := visitor(w.current, w.entering)
switch status {
case GoToNext:
w.next()
case SkipChildren:
w.entering = false
w.next()
case Terminate:
return
}
}
}
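A short sketch of using the Terminate status to stop a traversal early, here to grab the first code block in a document; the helper name and import path are assumptions.

```go
package main

import (
	"fmt"

	"github.com/russross/blackfriday" // import path assumed; a vendored v2 copy may differ
)

// firstCodeBlock returns the contents of the first CodeBlock node, stopping
// the walk as soon as one is found.
func firstCodeBlock(doc *blackfriday.Node) ([]byte, bool) {
	var literal []byte
	found := false
	doc.Walk(func(node *blackfriday.Node, entering bool) blackfriday.WalkStatus {
		if entering && node.Type == blackfriday.CodeBlock {
			literal, found = node.Literal, true
			return blackfriday.Terminate
		}
		return blackfriday.GoToNext
	})
	return literal, found
}

func main() {
	md := []byte("intro\n\n    fmt.Println(\"hi\")\n\nmore text\n")
	ast := blackfriday.New(blackfriday.WithExtensions(blackfriday.CommonExtensions)).Parse(md)
	if code, ok := firstCodeBlock(ast); ok {
		fmt.Printf("%s", code)
	}
}
```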
type nodeWalker struct {
current *Node
root *Node
entering bool
}
func newNodeWalker(root *Node) *nodeWalker {
return &nodeWalker{
current: root,
root: root,
entering: true,
}
}
func (nw *nodeWalker) next() {
if (!nw.current.isContainer() || !nw.entering) && nw.current == nw.root {
nw.current = nil
return
}
if nw.entering && nw.current.isContainer() {
if nw.current.FirstChild != nil {
nw.current = nw.current.FirstChild
nw.entering = true
} else {
nw.entering = false
}
} else if nw.current.Next == nil {
nw.current = nw.current.Parent
nw.entering = false
} else {
nw.current = nw.current.Next
nw.entering = true
}
}
func dump(ast *Node) {
fmt.Println(dumpString(ast))
}
func dumpR(ast *Node, depth int) string {
if ast == nil {
return ""
}
indent := bytes.Repeat([]byte("\t"), depth)
content := ast.Literal
if content == nil {
content = ast.content
}
result := fmt.Sprintf("%s%s(%q)\n", indent, ast.Type, content)
for n := ast.FirstChild; n != nil; n = n.Next {
result += dumpR(n, depth+1)
}
return result
}
func dumpString(ast *Node) string {
return dumpR(ast, 0)
}

View File

@ -1,457 +0,0 @@
//
// Blackfriday Markdown Processor
// Available at http://github.com/russross/blackfriday
//
// Copyright © 2011 Russ Ross <russ@russross.com>.
// Distributed under the Simplified BSD License.
// See README.md for details.
//
//
//
// SmartyPants rendering
//
//
package blackfriday
import (
"bytes"
"io"
)
// SPRenderer is a struct containing state of a Smartypants renderer.
type SPRenderer struct {
inSingleQuote bool
inDoubleQuote bool
callbacks [256]smartCallback
}
func wordBoundary(c byte) bool {
return c == 0 || isspace(c) || ispunct(c)
}
func tolower(c byte) byte {
if c >= 'A' && c <= 'Z' {
return c - 'A' + 'a'
}
return c
}
func isdigit(c byte) bool {
return c >= '0' && c <= '9'
}
func smartQuoteHelper(out *bytes.Buffer, previousChar byte, nextChar byte, quote byte, isOpen *bool, addNBSP bool) bool {
// edge of the buffer is likely to be a tag that we don't get to see,
// so we treat it like text sometimes
// enumerate all sixteen possibilities for (previousChar, nextChar)
// each can be one of {0, space, punct, other}
switch {
case previousChar == 0 && nextChar == 0:
// context is not any help here, so toggle
*isOpen = !*isOpen
case isspace(previousChar) && nextChar == 0:
// [ "] might be [ "<code>foo...]
*isOpen = true
case ispunct(previousChar) && nextChar == 0:
// [!"] hmm... could be [Run!"] or [("<code>...]
*isOpen = false
case /* isnormal(previousChar) && */ nextChar == 0:
// [a"] is probably a close
*isOpen = false
case previousChar == 0 && isspace(nextChar):
// [" ] might be [...foo</code>" ]
*isOpen = false
case isspace(previousChar) && isspace(nextChar):
// [ " ] context is not any help here, so toggle
*isOpen = !*isOpen
case ispunct(previousChar) && isspace(nextChar):
// [!" ] is probably a close
*isOpen = false
case /* isnormal(previousChar) && */ isspace(nextChar):
// [a" ] this is one of the easy cases
*isOpen = false
case previousChar == 0 && ispunct(nextChar):
// ["!] hmm... could be ["$1.95] or [</code>"!...]
*isOpen = false
case isspace(previousChar) && ispunct(nextChar):
// [ "!] looks more like [ "$1.95]
*isOpen = true
case ispunct(previousChar) && ispunct(nextChar):
// [!"!] context is not any help here, so toggle
*isOpen = !*isOpen
case /* isnormal(previousChar) && */ ispunct(nextChar):
// [a"!] is probably a close
*isOpen = false
case previousChar == 0 /* && isnormal(nextChar) */ :
// ["a] is probably an open
*isOpen = true
case isspace(previousChar) /* && isnormal(nextChar) */ :
// [ "a] this is one of the easy cases
*isOpen = true
case ispunct(previousChar) /* && isnormal(nextChar) */ :
// [!"a] is probably an open
*isOpen = true
default:
// [a'b] maybe a contraction?
*isOpen = false
}
// Note that with the limited lookahead, this non-breaking
// space will also be appended to single double quotes.
if addNBSP && !*isOpen {
out.WriteString("&nbsp;")
}
out.WriteByte('&')
if *isOpen {
out.WriteByte('l')
} else {
out.WriteByte('r')
}
out.WriteByte(quote)
out.WriteString("quo;")
if addNBSP && *isOpen {
out.WriteString("&nbsp;")
}
return true
}
func (r *SPRenderer) smartSingleQuote(out *bytes.Buffer, previousChar byte, text []byte) int {
if len(text) >= 2 {
t1 := tolower(text[1])
if t1 == '\'' {
nextChar := byte(0)
if len(text) >= 3 {
nextChar = text[2]
}
if smartQuoteHelper(out, previousChar, nextChar, 'd', &r.inDoubleQuote, false) {
return 1
}
}
if (t1 == 's' || t1 == 't' || t1 == 'm' || t1 == 'd') && (len(text) < 3 || wordBoundary(text[2])) {
out.WriteString("&rsquo;")
return 0
}
if len(text) >= 3 {
t2 := tolower(text[2])
if ((t1 == 'r' && t2 == 'e') || (t1 == 'l' && t2 == 'l') || (t1 == 'v' && t2 == 'e')) &&
(len(text) < 4 || wordBoundary(text[3])) {
out.WriteString("&rsquo;")
return 0
}
}
}
nextChar := byte(0)
if len(text) > 1 {
nextChar = text[1]
}
if smartQuoteHelper(out, previousChar, nextChar, 's', &r.inSingleQuote, false) {
return 0
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartParens(out *bytes.Buffer, previousChar byte, text []byte) int {
if len(text) >= 3 {
t1 := tolower(text[1])
t2 := tolower(text[2])
if t1 == 'c' && t2 == ')' {
out.WriteString("&copy;")
return 2
}
if t1 == 'r' && t2 == ')' {
out.WriteString("&reg;")
return 2
}
if len(text) >= 4 && t1 == 't' && t2 == 'm' && text[3] == ')' {
out.WriteString("&trade;")
return 3
}
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartDash(out *bytes.Buffer, previousChar byte, text []byte) int {
if len(text) >= 2 {
if text[1] == '-' {
out.WriteString("&mdash;")
return 1
}
if wordBoundary(previousChar) && wordBoundary(text[1]) {
out.WriteString("&ndash;")
return 0
}
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartDashLatex(out *bytes.Buffer, previousChar byte, text []byte) int {
if len(text) >= 3 && text[1] == '-' && text[2] == '-' {
out.WriteString("&mdash;")
return 2
}
if len(text) >= 2 && text[1] == '-' {
out.WriteString("&ndash;")
return 1
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartAmpVariant(out *bytes.Buffer, previousChar byte, text []byte, quote byte, addNBSP bool) int {
if bytes.HasPrefix(text, []byte("&quot;")) {
nextChar := byte(0)
if len(text) >= 7 {
nextChar = text[6]
}
if smartQuoteHelper(out, previousChar, nextChar, quote, &r.inDoubleQuote, addNBSP) {
return 5
}
}
if bytes.HasPrefix(text, []byte("&#0;")) {
return 3
}
out.WriteByte('&')
return 0
}
func (r *SPRenderer) smartAmp(angledQuotes, addNBSP bool) func(*bytes.Buffer, byte, []byte) int {
var quote byte = 'd'
if angledQuotes {
quote = 'a'
}
return func(out *bytes.Buffer, previousChar byte, text []byte) int {
return r.smartAmpVariant(out, previousChar, text, quote, addNBSP)
}
}
func (r *SPRenderer) smartPeriod(out *bytes.Buffer, previousChar byte, text []byte) int {
if len(text) >= 3 && text[1] == '.' && text[2] == '.' {
out.WriteString("&hellip;")
return 2
}
if len(text) >= 5 && text[1] == ' ' && text[2] == '.' && text[3] == ' ' && text[4] == '.' {
out.WriteString("&hellip;")
return 4
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartBacktick(out *bytes.Buffer, previousChar byte, text []byte) int {
if len(text) >= 2 && text[1] == '`' {
nextChar := byte(0)
if len(text) >= 3 {
nextChar = text[2]
}
if smartQuoteHelper(out, previousChar, nextChar, 'd', &r.inDoubleQuote, false) {
return 1
}
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartNumberGeneric(out *bytes.Buffer, previousChar byte, text []byte) int {
if wordBoundary(previousChar) && previousChar != '/' && len(text) >= 3 {
// is it of the form digits/digits(word boundary)?, i.e., \d+/\d+\b
		// note: check for regular slash (/) or fraction slash (⁄, 0x2044, or 0xe2 81 84 in utf-8)
// and avoid changing dates like 1/23/2005 into fractions.
numEnd := 0
for len(text) > numEnd && isdigit(text[numEnd]) {
numEnd++
}
if numEnd == 0 {
out.WriteByte(text[0])
return 0
}
denStart := numEnd + 1
if len(text) > numEnd+3 && text[numEnd] == 0xe2 && text[numEnd+1] == 0x81 && text[numEnd+2] == 0x84 {
denStart = numEnd + 3
} else if len(text) < numEnd+2 || text[numEnd] != '/' {
out.WriteByte(text[0])
return 0
}
denEnd := denStart
for len(text) > denEnd && isdigit(text[denEnd]) {
denEnd++
}
if denEnd == denStart {
out.WriteByte(text[0])
return 0
}
if len(text) == denEnd || wordBoundary(text[denEnd]) && text[denEnd] != '/' {
out.WriteString("<sup>")
out.Write(text[:numEnd])
out.WriteString("</sup>&frasl;<sub>")
out.Write(text[denStart:denEnd])
out.WriteString("</sub>")
return denEnd - 1
}
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartNumber(out *bytes.Buffer, previousChar byte, text []byte) int {
if wordBoundary(previousChar) && previousChar != '/' && len(text) >= 3 {
if text[0] == '1' && text[1] == '/' && text[2] == '2' {
if len(text) < 4 || wordBoundary(text[3]) && text[3] != '/' {
out.WriteString("&frac12;")
return 2
}
}
if text[0] == '1' && text[1] == '/' && text[2] == '4' {
if len(text) < 4 || wordBoundary(text[3]) && text[3] != '/' || (len(text) >= 5 && tolower(text[3]) == 't' && tolower(text[4]) == 'h') {
out.WriteString("&frac14;")
return 2
}
}
if text[0] == '3' && text[1] == '/' && text[2] == '4' {
if len(text) < 4 || wordBoundary(text[3]) && text[3] != '/' || (len(text) >= 6 && tolower(text[3]) == 't' && tolower(text[4]) == 'h' && tolower(text[5]) == 's') {
out.WriteString("&frac34;")
return 2
}
}
}
out.WriteByte(text[0])
return 0
}
func (r *SPRenderer) smartDoubleQuoteVariant(out *bytes.Buffer, previousChar byte, text []byte, quote byte) int {
nextChar := byte(0)
if len(text) > 1 {
nextChar = text[1]
}
if !smartQuoteHelper(out, previousChar, nextChar, quote, &r.inDoubleQuote, false) {
out.WriteString("&quot;")
}
return 0
}
func (r *SPRenderer) smartDoubleQuote(out *bytes.Buffer, previousChar byte, text []byte) int {
return r.smartDoubleQuoteVariant(out, previousChar, text, 'd')
}
func (r *SPRenderer) smartAngledDoubleQuote(out *bytes.Buffer, previousChar byte, text []byte) int {
return r.smartDoubleQuoteVariant(out, previousChar, text, 'a')
}
func (r *SPRenderer) smartLeftAngle(out *bytes.Buffer, previousChar byte, text []byte) int {
i := 0
for i < len(text) && text[i] != '>' {
i++
}
out.Write(text[:i+1])
return i
}
type smartCallback func(out *bytes.Buffer, previousChar byte, text []byte) int
// NewSmartypantsRenderer constructs a Smartypants renderer object.
func NewSmartypantsRenderer(flags HTMLFlags) *SPRenderer {
var (
r SPRenderer
smartAmpAngled = r.smartAmp(true, false)
smartAmpAngledNBSP = r.smartAmp(true, true)
smartAmpRegular = r.smartAmp(false, false)
smartAmpRegularNBSP = r.smartAmp(false, true)
addNBSP = flags&SmartypantsQuotesNBSP != 0
)
if flags&SmartypantsAngledQuotes == 0 {
r.callbacks['"'] = r.smartDoubleQuote
if !addNBSP {
r.callbacks['&'] = smartAmpRegular
} else {
r.callbacks['&'] = smartAmpRegularNBSP
}
} else {
r.callbacks['"'] = r.smartAngledDoubleQuote
if !addNBSP {
r.callbacks['&'] = smartAmpAngled
} else {
r.callbacks['&'] = smartAmpAngledNBSP
}
}
r.callbacks['\''] = r.smartSingleQuote
r.callbacks['('] = r.smartParens
if flags&SmartypantsDashes != 0 {
if flags&SmartypantsLatexDashes == 0 {
r.callbacks['-'] = r.smartDash
} else {
r.callbacks['-'] = r.smartDashLatex
}
}
r.callbacks['.'] = r.smartPeriod
if flags&SmartypantsFractions == 0 {
r.callbacks['1'] = r.smartNumber
r.callbacks['3'] = r.smartNumber
} else {
for ch := '1'; ch <= '9'; ch++ {
r.callbacks[ch] = r.smartNumberGeneric
}
}
r.callbacks['<'] = r.smartLeftAngle
r.callbacks['`'] = r.smartBacktick
return &r
}
// Process is the entry point of the Smartypants renderer.
func (r *SPRenderer) Process(w io.Writer, text []byte) {
mark := 0
for i := 0; i < len(text); i++ {
if action := r.callbacks[text[i]]; action != nil {
if i > mark {
w.Write(text[mark:i])
}
previousChar := byte(0)
if i > 0 {
previousChar = text[i-1]
}
var tmp bytes.Buffer
i += action(&tmp, previousChar, text[i:])
w.Write(tmp.Bytes())
mark = i + 1
}
}
if mark < len(text) {
w.Write(text[mark:])
}
}
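
For reference, here is a minimal sketch of driving this renderer directly, assuming the vendored package is blackfriday v2 and importable as `github.com/russross/blackfriday/v2` (the import path is an assumption; it is not shown in this diff):

```go
package main

import (
	"bytes"
	"fmt"

	bf "github.com/russross/blackfriday/v2" // assumed import path for the vendored package
)

func main() {
	// Enable fraction and dash substitution on top of the always-registered
	// quote, parenthesis, ellipsis and backtick callbacks.
	sp := bf.NewSmartypantsRenderer(bf.SmartypantsFractions | bf.SmartypantsDashes)

	var out bytes.Buffer
	sp.Process(&out, []byte(`"Mix 1/2 cup -- no more," she said...`))

	// Prints the input with HTML entities substituted for the quotes,
	// the 1/2 fraction, the double dash and the ellipsis.
	fmt.Println(out.String())
}
```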

View File

@ -1,3 +0,0 @@
[submodule "vendor/github.com/bmizerany/assert"]
path = vendor/github.com/bmizerany/assert
url = https://github.com/bmizerany/assert

View File

@ -1,80 +0,0 @@
Backo [![GoDoc](http://godoc.org/github.com/segmentio/backo-go?status.png)](http://godoc.org/github.com/segmentio/backo-go)
-----
Exponential backoff for Go (Go port of segmentio/backo).
Usage
-----
```go
import "github.com/segmentio/backo-go"
// Create a Backo instance.
backo := backo.NewBacko(milliseconds(100), 2, 1, milliseconds(10*1000))
// OR with defaults.
backo := backo.DefaultBacko()
// Use the ticker API.
ticker := backo.NewTicker()
for {
timeout := time.After(5 * time.Minute)
select {
case <-ticker.C:
fmt.Println("ticked")
case <- timeout:
fmt.Println("timed out")
}
}
// Or simply work with backoff intervals directly.
for i := 0; i < n; i++ {
// Sleep the current goroutine.
backo.Sleep(i)
// Retrieve the duration manually.
duration := backo.Duration(i)
}
```
License
-------
```
WWWWWW||WWWWWW
W W W||W W W
||
( OO )__________
/ | \
/o o| MIT \
\___/||_||__||_|| *
|| || || ||
_||_|| _||_||
(__|__|(__|__|
The MIT License (MIT)
Copyright (c) 2015 Segment, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
[1]: http://github.com/segmentio/backo-java
[2]: http://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=com.segment.backo&a=backo&v=LATEST

View File

@ -1,83 +0,0 @@
package backo
import (
"math"
"math/rand"
"time"
)
type Backo struct {
base time.Duration
factor uint8
jitter float64
cap time.Duration
}
// Creates a backo instance with the given parameters
func NewBacko(base time.Duration, factor uint8, jitter float64, cap time.Duration) *Backo {
return &Backo{base, factor, jitter, cap}
}
// Creates a backo instance with the following defaults:
// base: 100 milliseconds
// factor: 2
// jitter: 0
// cap: 10 seconds
func DefaultBacko() *Backo {
return NewBacko(time.Millisecond*100, 2, 0, time.Second*10)
}
// Duration returns the backoff interval for the given attempt.
func (backo *Backo) Duration(attempt int) time.Duration {
duration := float64(backo.base) * math.Pow(float64(backo.factor), float64(attempt))
if backo.jitter != 0 {
random := rand.Float64()
deviation := math.Floor(random * backo.jitter * duration)
if (int(math.Floor(random*10)) & 1) == 0 {
duration = duration - deviation
} else {
duration = duration + deviation
}
}
duration = math.Min(float64(duration), float64(backo.cap))
return time.Duration(duration)
}
// Sleep pauses the current goroutine for the backoff interval for the given attempt.
func (backo *Backo) Sleep(attempt int) {
duration := backo.Duration(attempt)
time.Sleep(duration)
}
type Ticker struct {
done chan struct{}
C <-chan time.Time
}
func (b *Backo) NewTicker() *Ticker {
c := make(chan time.Time, 1)
ticker := &Ticker{
done: make(chan struct{}, 1),
C: c,
}
go func() {
for i := 0; ; i++ {
select {
case t := <-time.After(b.Duration(i)):
c <- t
case <-ticker.done:
close(c)
return
}
}
}()
return ticker
}
func (t *Ticker) Stop() {
t.done <- struct{}{}
}
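
As a sanity check on the growth rule above (base × factor^attempt, capped), a small sketch against the exported API; the import path is taken from the vendor directory shown in this commit:

```go
package main

import (
	"fmt"
	"time"

	backo "github.com/segmentio/backo-go"
)

func main() {
	// Same parameters as DefaultBacko, spelled out: 100ms base, factor 2,
	// no jitter, capped at 10 seconds.
	b := backo.NewBacko(100*time.Millisecond, 2, 0, 10*time.Second)

	for attempt := 0; attempt < 8; attempt++ {
		// 100ms, 200ms, 400ms, ... then clamped to 10s once the
		// exponential term exceeds the cap.
		fmt.Printf("attempt %d -> %v\n", attempt, b.Duration(attempt))
	}
}
```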

View File

@ -1,16 +0,0 @@
sudo: false
language: go
go:
- 1.x
- master
matrix:
allow_failures:
- go: master
fast_finish: true
install:
- # Do nothing. This is needed to prevent default install action "go get -t -v ./..." from happening here (we want it to happen inside script step).
script:
- go get -t -v ./...
- diff -u <(echo -n) <(gofmt -d -s .)
- go tool vet .
- go test -v -race ./...

View File

@ -1,21 +0,0 @@
MIT License
Copyright (c) 2015 Dmitri Shuralyov
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -1,36 +0,0 @@
sanitized_anchor_name
=====================
[![Build Status](https://travis-ci.org/shurcooL/sanitized_anchor_name.svg?branch=master)](https://travis-ci.org/shurcooL/sanitized_anchor_name) [![GoDoc](https://godoc.org/github.com/shurcooL/sanitized_anchor_name?status.svg)](https://godoc.org/github.com/shurcooL/sanitized_anchor_name)
Package sanitized_anchor_name provides a func to create sanitized anchor names.
Its logic can be reused by multiple packages to create interoperable anchor names
and links to those anchors.
At this time, it does not try to ensure that generated anchor names
are unique; that responsibility falls on the caller.
Installation
------------
```bash
go get -u github.com/shurcooL/sanitized_anchor_name
```
Example
-------
```Go
anchorName := sanitized_anchor_name.Create("This is a header")
fmt.Println(anchorName)
// Output:
// this-is-a-header
```
License
-------
- [MIT License](LICENSE)

View File

@ -1,29 +0,0 @@
// Package sanitized_anchor_name provides a func to create sanitized anchor names.
//
// Its logic can be reused by multiple packages to create interoperable anchor names
// and links to those anchors.
//
// At this time, it does not try to ensure that generated anchor names
// are unique; that responsibility falls on the caller.
package sanitized_anchor_name // import "github.com/shurcooL/sanitized_anchor_name"
import "unicode"
// Create returns a sanitized anchor name for the given text.
func Create(text string) string {
var anchorName []rune
var futureDash = false
for _, r := range text {
switch {
case unicode.IsLetter(r) || unicode.IsNumber(r):
if futureDash && len(anchorName) > 0 {
anchorName = append(anchorName, '-')
}
futureDash = false
anchorName = append(anchorName, unicode.ToLower(r))
default:
futureDash = true
}
}
return string(anchorName)
}

16
vendor/github.com/tj/front/Readme.md generated vendored
View File

@ -1,16 +0,0 @@
# Front
Frontmatter unmarshaller; couldn't find one without a weird API.
## Badges
[![GoDoc](https://godoc.org/github.com/tj/front?status.svg)](https://godoc.org/github.com/tj/front)
![](https://img.shields.io/badge/license-MIT-blue.svg)
![](https://img.shields.io/badge/status-stable-green.svg)
[![](http://apex.sh/images/badge.svg)](https://apex.sh/ping/)
---
> [tjholowaychuk.com](http://tjholowaychuk.com) &nbsp;&middot;&nbsp;
> GitHub [@tj](https://github.com/tj) &nbsp;&middot;&nbsp;
> Twitter [@tjholowaychuk](https://twitter.com/tjholowaychuk)
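
Since the README above has no usage snippet, here is a minimal sketch of `Unmarshal` (defined in front.go below); the struct fields and frontmatter keys are arbitrary examples:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tj/front"
)

// meta is an arbitrary example of the frontmatter shape an application might expect.
type meta struct {
	Title string `yaml:"title"`
	Draft bool   `yaml:"draft"`
}

func main() {
	src := []byte("---\ntitle: Hello\ndraft: true\n---\n\n# Body text\n")

	var m meta
	body, err := front.Unmarshal(src, &m)
	if err != nil {
		log.Fatal(err)
	}

	// Frontmatter is decoded into m; body holds everything after the second delimiter.
	fmt.Printf("%+v\n%s", m, body)
}
```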

24
vendor/github.com/tj/front/front.go generated vendored
View File

@ -1,24 +0,0 @@
// Package front provides YAML frontmatter unmarshalling.
package front
import (
"bytes"
"gopkg.in/yaml.v1"
)
// Delimiter.
var delim = []byte("---")
// Unmarshal parses YAML frontmatter and returns the content. When no
// frontmatter delimiters are present the original content is returned.
func Unmarshal(b []byte, v interface{}) (content []byte, err error) {
if !bytes.HasPrefix(b, delim) {
return b, nil
}
parts := bytes.SplitN(b, delim, 3)
content = parts[2]
err = yaml.Unmarshal(parts[1], v)
return
}

View File

@ -1,5 +0,0 @@
# This source file refers to The gocql Authors for copyright purposes.
Christoph Hack <christoph@tux21b.org>
Jonathan Rudenberg <jonathan@titanous.com>
Thorsten von Eicken <tve@rightscale.com>

27
vendor/github.com/xtgo/uuid/LICENSE generated vendored
View File

@ -1,27 +0,0 @@
Copyright (c) 2012 The gocql Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

204
vendor/github.com/xtgo/uuid/uuid.go generated vendored
View File

@ -1,204 +0,0 @@
// Copyright (c) 2012 The gocql Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package uuid can be used to generate and parse universally unique
// identifiers, a standardized format in the form of a 128 bit number.
//
// http://tools.ietf.org/html/rfc4122
package uuid
import (
"crypto/rand"
"encoding/hex"
"errors"
"io"
"net"
"strconv"
"time"
)
type UUID [16]byte
var hardwareAddr []byte
const (
VariantNCSCompat = 0
VariantIETF = 2
VariantMicrosoft = 6
VariantFuture = 7
)
func init() {
if interfaces, err := net.Interfaces(); err == nil {
for _, i := range interfaces {
if i.Flags&net.FlagLoopback == 0 && len(i.HardwareAddr) > 0 {
hardwareAddr = i.HardwareAddr
break
}
}
}
if hardwareAddr == nil {
// If we failed to obtain the MAC address of the current computer,
// we will use a randomly generated 6 byte sequence instead and set
// the multicast bit as recommended in RFC 4122.
hardwareAddr = make([]byte, 6)
_, err := io.ReadFull(rand.Reader, hardwareAddr)
if err != nil {
panic(err)
}
hardwareAddr[0] = hardwareAddr[0] | 0x01
}
}
// Parse parses a 32 digit hexadecimal number (that might contain hyphens)
// representing a UUID.
func Parse(input string) (UUID, error) {
var u UUID
j := 0
for i := 0; i < len(input); i++ {
b := input[i]
switch {
default:
fallthrough
case j == 32:
goto err
case b == '-':
continue
case '0' <= b && b <= '9':
b -= '0'
case 'a' <= b && b <= 'f':
b -= 'a' - 10
case 'A' <= b && b <= 'F':
b -= 'A' - 10
}
u[j/2] |= b << byte(^j&1<<2)
j++
}
if j == 32 {
return u, nil
}
err:
return UUID{}, errors.New("invalid UUID " + strconv.Quote(input))
}
// FromBytes converts a raw byte slice to a UUID. It will panic if the slice
// isn't exactly 16 bytes long.
func FromBytes(input []byte) UUID {
var u UUID
if len(input) != 16 {
panic("UUIDs must be exactly 16 bytes long")
}
copy(u[:], input)
return u
}
// NewRandom generates a totally random UUID (version 4) as described in
// RFC 4122.
func NewRandom() UUID {
var u UUID
io.ReadFull(rand.Reader, u[:])
u[6] &= 0x0F // clear version
u[6] |= 0x40 // set version to 4 (random uuid)
u[8] &= 0x3F // clear variant
u[8] |= 0x80 // set to IETF variant
return u
}
var timeBase = time.Date(1582, time.October, 15, 0, 0, 0, 0, time.UTC).Unix()
// NewTime generates a new time based UUID (version 1) as described in RFC
// 4122. This UUID contains the MAC address of the node that generated the
// UUID, a timestamp and a sequence number.
func NewTime() UUID {
var u UUID
now := time.Now().In(time.UTC)
t := uint64(now.Unix()-timeBase)*10000000 + uint64(now.Nanosecond()/100)
u[0], u[1], u[2], u[3] = byte(t>>24), byte(t>>16), byte(t>>8), byte(t)
u[4], u[5] = byte(t>>40), byte(t>>32)
u[6], u[7] = byte(t>>56)&0x0F, byte(t>>48)
var clockSeq [2]byte
io.ReadFull(rand.Reader, clockSeq[:])
u[8] = clockSeq[1]
u[9] = clockSeq[0]
copy(u[10:], hardwareAddr)
u[6] |= 0x10 // set version to 1 (time based uuid)
u[8] &= 0x3F // clear variant
u[8] |= 0x80 // set to IETF variant
return u
}
// String returns the UUID in its canonical form, a 32 digit hexadecimal
// number in the form of xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
func (u UUID) String() string {
buf := [36]byte{8: '-', 13: '-', 18: '-', 23: '-'}
hex.Encode(buf[0:], u[0:4])
hex.Encode(buf[9:], u[4:6])
hex.Encode(buf[14:], u[6:8])
hex.Encode(buf[19:], u[8:10])
hex.Encode(buf[24:], u[10:])
return string(buf[:])
}
// Bytes returns the raw byte slice for this UUID. A UUID is always 128 bits
// (16 bytes) long.
func (u UUID) Bytes() []byte {
return u[:]
}
// Variant returns the variant of this UUID. This package will only generate
// UUIDs in the IETF variant.
func (u UUID) Variant() int {
x := u[8]
switch byte(0) {
case x & 0x80:
return VariantNCSCompat
case x & 0x40:
return VariantIETF
case x & 0x20:
return VariantMicrosoft
}
return VariantFuture
}
// Version extracts the version of this UUID variant. RFC 4122 describes
// five kinds of UUIDs.
func (u UUID) Version() int {
return int(u[6] & 0xF0 >> 4)
}
// Node extracts the MAC address of the node that generated this UUID. It will
// return nil if the UUID is not a time based UUID (version 1).
func (u UUID) Node() []byte {
if u.Version() != 1 {
return nil
}
return u[10:]
}
// Timestamp extracts the timestamp information from a time based UUID
// (version 1).
func (u UUID) Timestamp() uint64 {
if u.Version() != 1 {
return 0
}
return uint64(u[0])<<24 + uint64(u[1])<<16 + uint64(u[2])<<8 +
uint64(u[3]) + uint64(u[4])<<40 + uint64(u[5])<<32 +
uint64(u[7])<<48 + uint64(u[6]&0x0F)<<56
}
// Time is like Timestamp, except that it returns a time.Time.
func (u UUID) Time() time.Time {
t := u.Timestamp()
if t == 0 {
return time.Time{}
}
sec := t / 10000000
nsec := t - sec
return time.Unix(int64(sec)+timeBase, int64(nsec))
}
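
A short sketch exercising the exported surface of this package (random and time-based generation, parsing, and the accessors), using the vendor path shown above as the import path:

```go
package main

import (
	"fmt"
	"log"

	"github.com/xtgo/uuid"
)

func main() {
	// Version 4: purely random.
	u := uuid.NewRandom()
	fmt.Println(u.String(), "version:", u.Version(), "variant:", u.Variant())

	// Round-trip through the canonical xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx form.
	parsed, err := uuid.Parse(u.String())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("round-trip ok:", parsed == u)

	// Version 1: carries a timestamp and the node's MAC address.
	t := uuid.NewTime()
	fmt.Println(t.Time(), t.Node())
}
```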

View File

@ -1,32 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof
# Emacs
*~
\#*
.\#*
# Artifacts
tmp/*

View File

@ -1,6 +0,0 @@
[submodule "vendor/github.com/segmentio/backo-go"]
path = vendor/github.com/segmentio/backo-go
url = https://github.com/segmentio/backo-go
[submodule "vendor/github.com/xtgo/uuid"]
path = vendor/github.com/xtgo/uuid
url = https://github.com/xtgo/uuid

View File

@ -1,70 +0,0 @@
v3.0.0 / 2016-06-02
===================
* 3.0 is a significant rewrite with multiple breaking changes.
* [Quickstart](https://segment.com/docs/sources/server/go/quickstart/).
* [Documentation](https://segment.com/docs/sources/server/go/).
* [GoDocs](https://godoc.org/gopkg.in/segmentio/analytics-go.v3).
* [What's New in v3](https://segment.com/docs/sources/server/go/#what-s-new-in-v3).
v2.1.0 / 2015-12-28
===================
* Add ability to set custom timestamps for messages.
* Add ability to set a custom `net/http` client.
* Add ability to set a custom logger.
* Fix edge case when client would try to upload no messages.
* Properly upload in-flight messages when client is asked to shutdown.
* Add ability to set `.integrations` field on messages.
* Fix resource leak with interval ticker after shutdown.
* Add retries and back-off when uploading messages.
* Add ability to set custom flush interval.
v2.0.0 / 2015-02-03
===================
* rewrite with breaking API changes
v1.2.0 / 2014-09-03
==================
* add public .Flush() method
* rename .Stop() to .Close()
v1.1.0 / 2014-09-02
==================
* add client.Stop() to flush/wait. Closes #7
v1.0.0 / 2014-08-26
==================
* fix response close
* change comments to be more go-like
* change uuid libraries
0.1.2 / 2014-06-11
==================
* add runnable example
* fix: close body
0.1.1 / 2014-05-31
==================
* refactor locking
0.1.0 / 2014-05-22
==================
* replace Debug option with debug package
0.0.2 / 2014-05-20
==================
* add .Start()
* add mutexes
* rename BufferSize to FlushAt and FlushInterval to FlushAfter
* lower FlushInterval to 5 seconds
* lower BufferSize to 20 to match other clients

View File

@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2016 Segment, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -1,26 +0,0 @@
ifndef CIRCLE_ARTIFACTS
CIRCLE_ARTIFACTS=tmp
endif
get:
@go get -v -t ./...
vet:
@go vet ./...
build:
@go build ./...
test:
@mkdir -p ${CIRCLE_ARTIFACTS}
@go test -race -coverprofile=${CIRCLE_ARTIFACTS}/cover.out .
@go tool cover -func ${CIRCLE_ARTIFACTS}/cover.out -o ${CIRCLE_ARTIFACTS}/cover.txt
@go tool cover -html ${CIRCLE_ARTIFACTS}/cover.out -o ${CIRCLE_ARTIFACTS}/cover.html
ci: get vet test
@if [ "$(RUN_E2E_TESTS)" != "true" ]; then \
echo "Skipping end to end tests."; else \
go get github.com/segmentio/library-e2e-tester/cmd/tester; \
tester -segment-write-key=$(SEGMENT_WRITE_KEY) -webhook-auth-username=$(WEBHOOK_AUTH_USERNAME) -webhook-bucket=$(WEBHOOK_BUCKET) -path='cli'; fi
.PHONY: get vet build test ci

View File

@ -1,55 +0,0 @@
# analytics-go [![Circle CI](https://circleci.com/gh/segmentio/analytics-go/tree/master.svg?style=shield)](https://circleci.com/gh/segmentio/analytics-go/tree/master) [![go-doc](https://godoc.org/github.com/segmentio/analytics-go?status.svg)](https://godoc.org/github.com/segmentio/analytics-go)
Segment analytics client for Go.
## Installation
The package can be installed via go get. We recommend that you use a
package version management system like the Go vendor directory or a tool like
Godep to avoid issues related to breaking API changes introduced between major
versions of the library.
To install it in the GOPATH:
```
go get github.com/segmentio/analytics-go
```
## Documentation
The links below should provide all the documentation needed to make the best
use of the library and the Segment API:
- [Documentation](https://segment.com/docs/libraries/go/)
- [godoc](https://godoc.org/gopkg.in/segmentio/analytics-go.v3)
- [API](https://segment.com/docs/libraries/http/)
- [Specs](https://segment.com/docs/spec/)
## Usage
```go
package main
import (
"os"
"github.com/segmentio/analytics-go"
)
func main() {
// Instantiates a client used to send messages to the Segment API.
client := analytics.New(os.Getenv("SEGMENT_WRITE_KEY"))
// Enqueues a track event that will be sent asynchronously.
client.Enqueue(analytics.Track{
UserId: "test-user",
Event: "test-snippet",
})
// Flushes any queued messages and closes the client.
client.Close()
}
```
## License
The library is released under the [MIT license](License.md).

View File

@ -1,38 +0,0 @@
package analytics
import "time"
// This type represents object sent in an alias call as described in
// https://segment.com/docs/libraries/http/#alias
type Alias struct {
// This field is exported for serialization purposes and shouldn't be set by
// the application, its value is always overwritten by the library.
Type string `json:"type,omitempty"`
MessageId string `json:"messageId,omitempty"`
PreviousId string `json:"previousId"`
UserId string `json:"userId"`
Timestamp time.Time `json:"timestamp,omitempty"`
Context *Context `json:"context,omitempty"`
Integrations Integrations `json:"integrations,omitempty"`
}
func (msg Alias) validate() error {
if len(msg.UserId) == 0 {
return FieldError{
Type: "analytics.Alias",
Name: "UserId",
Value: msg.UserId,
}
}
if len(msg.PreviousId) == 0 {
return FieldError{
Type: "analytics.Alias",
Name: "PreviousId",
Value: msg.PreviousId,
}
}
return nil
}

View File

@ -1,388 +0,0 @@
package analytics
import (
"fmt"
"io"
"io/ioutil"
"sync"
"bytes"
"encoding/json"
"net/http"
"time"
)
// Version of the client.
const Version = "3.0.0"
// This interface is the main API exposed by the analytics package.
// Values that satisfy this interface are returned by the client constructors
// provided by the package and provide a way to send messages via the HTTP API.
type Client interface {
io.Closer
// Queues a message to be sent by the client when the conditions for a batch
// upload are met.
// This is the main method you'll be using, a typical flow would look like
// this:
//
// client := analytics.New(writeKey)
// ...
// client.Enqueue(analytics.Track{ ... })
// ...
// client.Close()
//
// The method returns an error if the message could not be queued, which
// happens if the client was already closed at the time the method was
// called or if the message was malformed.
Enqueue(Message) error
}
type client struct {
Config
key string
// This channel is where the `Enqueue` method writes messages so they can be
// picked up and pushed by the backend goroutine taking care of applying the
// batching rules.
msgs chan Message
// These two channels are used to synchronize the client shutting down when
// `Close` is called.
// The first channel is closed to signal the backend goroutine that it has
// to stop, then the second one is closed by the backend goroutine to signal
// that it has finished flushing all queued messages.
quit chan struct{}
shutdown chan struct{}
// This HTTP client is used to send requests to the backend, it uses the
// HTTP transport provided in the configuration.
http http.Client
}
// Instantiate a new client that uses the write key passed as first argument to
// send messages to the backend.
// The client is created with the default configuration.
func New(writeKey string) Client {
// Here we can ignore the error because the default config is always valid.
c, _ := NewWithConfig(writeKey, Config{})
return c
}
// Instantiate a new client that uses the write key and configuration passed as
// arguments to send messages to the backend.
// The function will return an error if the configuration contained impossible
// values (like a negative flush interval for example).
// When the function returns an error the returned client will always be nil.
func NewWithConfig(writeKey string, config Config) (cli Client, err error) {
if err = config.validate(); err != nil {
return
}
c := &client{
Config: makeConfig(config),
key: writeKey,
msgs: make(chan Message, 100),
quit: make(chan struct{}),
shutdown: make(chan struct{}),
http: makeHttpClient(config.Transport),
}
go c.loop()
cli = c
return
}
func makeHttpClient(transport http.RoundTripper) http.Client {
httpClient := http.Client{
Transport: transport,
}
if supportsTimeout(transport) {
httpClient.Timeout = 10 * time.Second
}
return httpClient
}
func (c *client) Enqueue(msg Message) (err error) {
if err = msg.validate(); err != nil {
return
}
var id = c.uid()
var ts = c.now()
switch m := msg.(type) {
case Alias:
m.Type = "alias"
m.MessageId = makeMessageId(m.MessageId, id)
m.Timestamp = makeTimestamp(m.Timestamp, ts)
msg = m
case Group:
m.Type = "group"
m.MessageId = makeMessageId(m.MessageId, id)
m.Timestamp = makeTimestamp(m.Timestamp, ts)
msg = m
case Identify:
m.Type = "identify"
m.MessageId = makeMessageId(m.MessageId, id)
m.Timestamp = makeTimestamp(m.Timestamp, ts)
msg = m
case Page:
m.Type = "page"
m.MessageId = makeMessageId(m.MessageId, id)
m.Timestamp = makeTimestamp(m.Timestamp, ts)
msg = m
case Screen:
m.Type = "screen"
m.MessageId = makeMessageId(m.MessageId, id)
m.Timestamp = makeTimestamp(m.Timestamp, ts)
msg = m
case Track:
m.Type = "track"
m.MessageId = makeMessageId(m.MessageId, id)
m.Timestamp = makeTimestamp(m.Timestamp, ts)
msg = m
}
defer func() {
// When the `msgs` channel is closed writing to it will trigger a panic.
// To avoid letting the panic propagate to the caller we recover from it
// and instead report that the client has been closed and shouldn't be
// used anymore.
if recover() != nil {
err = ErrClosed
}
}()
c.msgs <- msg
return
}
// Close and flush metrics.
func (c *client) Close() (err error) {
defer func() {
// Always recover, a panic could be raised if `c`.quit was closed which
// means the method was called more than once.
if recover() != nil {
err = ErrClosed
}
}()
close(c.quit)
<-c.shutdown
return
}
// Asynchronously send batched requests.
func (c *client) sendAsync(msgs []message, wg *sync.WaitGroup, ex *executor) {
wg.Add(1)
if !ex.do(func() {
defer wg.Done()
defer func() {
// In case a bug is introduced in the send function that triggers
// a panic, we don't want this to ever crash the application so we
// catch it here and log it instead.
if err := recover(); err != nil {
c.errorf("panic - %s", err)
}
}()
c.send(msgs)
}) {
wg.Done()
c.errorf("sending messages failed - %s", ErrTooManyRequests)
c.notifyFailure(msgs, ErrTooManyRequests)
}
}
// Send batch request.
func (c *client) send(msgs []message) {
const attempts = 10
b, err := json.Marshal(batch{
MessageId: c.uid(),
SentAt: c.now(),
Messages: msgs,
Context: c.DefaultContext,
})
if err != nil {
c.errorf("marshalling messages - %s", err)
c.notifyFailure(msgs, err)
return
}
for i := 0; i != attempts; i++ {
if err = c.upload(b); err == nil {
c.notifySuccess(msgs)
return
}
// Wait for either a retry timeout or the client to be closed.
select {
case <-time.After(c.RetryAfter(i)):
case <-c.quit:
c.errorf("%d messages dropped because they failed to be sent and the client was closed", len(msgs))
c.notifyFailure(msgs, err)
return
}
}
c.errorf("%d messages dropped because they failed to be sent after %d attempts", len(msgs), attempts)
c.notifyFailure(msgs, err)
}
// Upload serialized batch message.
func (c *client) upload(b []byte) error {
url := c.Endpoint + "/v1/batch"
req, err := http.NewRequest("POST", url, bytes.NewReader(b))
if err != nil {
c.errorf("creating request - %s", err)
return err
}
req.Header.Add("User-Agent", "analytics-go (version: "+Version+")")
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Content-Length", string(len(b)))
req.SetBasicAuth(c.key, "")
res, err := c.http.Do(req)
if err != nil {
c.errorf("sending request - %s", err)
return err
}
defer res.Body.Close()
return c.report(res)
}
// Report on response body.
func (c *client) report(res *http.Response) (err error) {
var body []byte
if res.StatusCode < 300 {
c.debugf("response %s", res.Status)
return
}
if body, err = ioutil.ReadAll(res.Body); err != nil {
c.errorf("response %d %s - %s", res.StatusCode, res.Status, err)
return
}
c.logf("response %d %s %s", res.StatusCode, res.Status, string(body))
return fmt.Errorf("%d %s", res.StatusCode, res.Status)
}
// Batch loop.
func (c *client) loop() {
defer close(c.shutdown)
wg := &sync.WaitGroup{}
defer wg.Wait()
tick := time.NewTicker(c.Interval)
defer tick.Stop()
ex := newExecutor(c.maxConcurrentRequests)
defer ex.close()
mq := messageQueue{
maxBatchSize: c.BatchSize,
maxBatchBytes: c.maxBatchBytes(),
}
for {
select {
case msg := <-c.msgs:
c.push(&mq, msg, wg, ex)
case <-tick.C:
c.flush(&mq, wg, ex)
case <-c.quit:
c.debugf("exit requested draining messages")
// Drain the msg channel; we have to close it first so that no more
// messages can be pushed, otherwise the loop would never end.
close(c.msgs)
for msg := range c.msgs {
c.push(&mq, msg, wg, ex)
}
c.flush(&mq, wg, ex)
c.debugf("exit")
return
}
}
}
func (c *client) push(q *messageQueue, m Message, wg *sync.WaitGroup, ex *executor) {
var msg message
var err error
if msg, err = makeMessage(m, maxMessageBytes); err != nil {
c.errorf("%s - %v", err, m)
c.notifyFailure([]message{{m, nil}}, err)
return
}
c.debugf("buffer (%d/%d) %v", len(q.pending), c.BatchSize, m)
if msgs := q.push(msg); msgs != nil {
c.debugf("exceeded messages batch limit with batch of %d messages flushing", len(msgs))
c.sendAsync(msgs, wg, ex)
}
}
func (c *client) flush(q *messageQueue, wg *sync.WaitGroup, ex *executor) {
if msgs := q.flush(); msgs != nil {
c.debugf("flushing %d messages", len(msgs))
c.sendAsync(msgs, wg, ex)
}
}
func (c *client) debugf(format string, args ...interface{}) {
if c.Verbose {
c.logf(format, args...)
}
}
func (c *client) logf(format string, args ...interface{}) {
c.Logger.Logf(format, args...)
}
func (c *client) errorf(format string, args ...interface{}) {
c.Logger.Errorf(format, args...)
}
func (c *client) maxBatchBytes() int {
b, _ := json.Marshal(batch{
MessageId: c.uid(),
SentAt: c.now(),
Context: c.DefaultContext,
})
return maxBatchBytes - len(b)
}
func (c *client) notifySuccess(msgs []message) {
if c.Callback != nil {
for _, m := range msgs {
c.Callback.Success(m.msg)
}
}
}
func (c *client) notifyFailure(msgs []message, err error) {
if c.Callback != nil {
for _, m := range msgs {
c.Callback.Failure(m.msg, err)
}
}
}
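
To make the Enqueue/Close life cycle and the Callback hooks above concrete, a hedged sketch; the gopkg.in/segmentio/analytics-go.v3 import path is assumed from the changelog, and the write key comes from the environment:

```go
package main

import (
	"log"
	"os"

	analytics "gopkg.in/segmentio/analytics-go.v3" // assumed import path for this vendored v3 client
)

// logCallback satisfies the Callback interface and simply logs delivery results.
type logCallback struct{ log *log.Logger }

func (c logCallback) Success(m analytics.Message)            { c.log.Printf("delivered: %#v", m) }
func (c logCallback) Failure(m analytics.Message, err error) { c.log.Printf("dropped: %v", err) }

func main() {
	logger := log.New(os.Stderr, "segment ", log.LstdFlags)

	client, err := analytics.NewWithConfig(os.Getenv("SEGMENT_WRITE_KEY"), analytics.Config{
		Callback: logCallback{log: logger},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Enqueue only validates and buffers; batching and upload happen in the
	// background loop shown above.
	if err := client.Enqueue(analytics.Track{UserId: "user-123", Event: "Signed Up"}); err != nil {
		log.Fatal(err)
	}

	// Close drains and flushes queued messages; calling Enqueue afterwards returns ErrClosed.
	if err := client.Close(); err != nil {
		log.Fatal(err)
	}
}
```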

View File

@ -1,173 +0,0 @@
package analytics
import (
"net/http"
"time"
"github.com/segmentio/backo-go"
"github.com/xtgo/uuid"
)
// Instances of this type carry the different configuration options that may
// be set when instantiating a client.
//
// Each field's zero-value is either meaningful or interpreted as using the
// default value defined by the library.
type Config struct {
// The endpoint to which the client connect and send their messages, set to
// `DefaultEndpoint` by default.
Endpoint string
// The flushing interval of the client. Messages will be sent when they've
// been queued up to the maximum batch size or when the flushing interval
// timer triggers.
Interval time.Duration
// The HTTP transport used by the client, this allows an application to
// redefine how requests are being sent at the HTTP level (for example,
// to change the connection pooling policy).
// If none is specified the client uses `http.DefaultTransport`.
Transport http.RoundTripper
// The logger used by the client to output info or error messages that are
// generated by background operations.
// If none is specified the client uses a standard logger that outputs to
// `os.Stderr`.
Logger Logger
// The callback object that will be used by the client to notify the
// application when message sends to the backend API succeed or fail.
Callback Callback
// The maximum number of messages that will be sent in one API call.
// Messages will be sent when they've been queued up to the maximum batch
// size or when the flushing interval timer triggers.
// Note that the API will still enforce a 500KB limit on each HTTP request
// which is independent from the number of embedded messages.
BatchSize int
// When set to true the client will send more frequent and detailed messages
// to its logger.
Verbose bool
// The default context set on each message sent by the client.
DefaultContext *Context
// The retry policy used by the client to resend requests that have failed.
// The function is called with how many times the operation has been retried
// and is expected to return how long the client should wait before trying
// again.
// If not set the client will fallback to use a default retry policy.
RetryAfter func(int) time.Duration
// A function called by the client to generate unique message identifiers.
// The client uses a UUID generator if none is provided.
// This field is not exported and only exposed internally to let unit tests
// mock the id generation.
uid func() string
// A function called by the client to get the current time, `time.Now` is
// used by default.
// This field is not exported and only exposed internally to let unit tests
// mock the current time.
now func() time.Time
// The maximum number of goroutines that will be spawned by a client to send
// requests to the backend API.
// This field is not exported and only exposed internally to let unit tests
// tune the concurrency limit.
maxConcurrentRequests int
}
// This constant sets the default endpoint to which client instances send
// messages if none was explicitly set.
const DefaultEndpoint = "https://api.segment.io"
// This constant sets the default flush interval used by client instances if
// none was explicitly set.
const DefaultInterval = 5 * time.Second
// This constant sets the default batch size used by client instances if none
// was explicitly set.
const DefaultBatchSize = 250
// Verifies that fields that don't have zero-values are set to valid values and
// returns an error describing the problem if a field was invalid.
func (c *Config) validate() error {
if c.Interval < 0 {
return ConfigError{
Reason: "negative time intervals are not supported",
Field: "Interval",
Value: c.Interval,
}
}
if c.BatchSize < 0 {
return ConfigError{
Reason: "negative batch sizes are not supported",
Field: "BatchSize",
Value: c.BatchSize,
}
}
return nil
}
// Given a config object as argument the function will set all zero-values to
// their defaults and return the modified object.
func makeConfig(c Config) Config {
if len(c.Endpoint) == 0 {
c.Endpoint = DefaultEndpoint
}
if c.Interval == 0 {
c.Interval = DefaultInterval
}
if c.Transport == nil {
c.Transport = http.DefaultTransport
}
if c.Logger == nil {
c.Logger = newDefaultLogger()
}
if c.BatchSize == 0 {
c.BatchSize = DefaultBatchSize
}
if c.DefaultContext == nil {
c.DefaultContext = &Context{}
}
if c.RetryAfter == nil {
c.RetryAfter = backo.DefaultBacko().Duration
}
if c.uid == nil {
c.uid = uid
}
if c.now == nil {
c.now = time.Now
}
if c.maxConcurrentRequests == 0 {
c.maxConcurrentRequests = 1000
}
// We always overwrite the 'library' field of the default context set on the
// client because we want this information to be accurate.
c.DefaultContext.Library = LibraryInfo{
Name: "analytics-go",
Version: Version,
}
return c
}
// This function returns a string representation of a UUID; it's the default
// function used for generating unique IDs.
func uid() string {
return uuid.NewRandom().String()
}
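
A small sketch of overriding some of these knobs (again assuming the gopkg.in/segmentio/analytics-go.v3 import path); zero-value fields keep the defaults listed above, and impossible values are rejected with a ConfigError:

```go
package main

import (
	"log"
	"os"
	"time"

	analytics "gopkg.in/segmentio/analytics-go.v3" // assumed import path for this vendored v3 client
)

func main() {
	client, err := analytics.NewWithConfig(os.Getenv("SEGMENT_WRITE_KEY"), analytics.Config{
		Interval:  2 * time.Second, // flush more often than DefaultInterval
		BatchSize: 100,             // smaller batches than DefaultBatchSize
		Verbose:   true,
		Logger:    analytics.StdLogger(log.New(os.Stderr, "segment ", log.LstdFlags)),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A negative interval fails validation up front with a ConfigError.
	if _, err := analytics.NewWithConfig("key", analytics.Config{Interval: -time.Second}); err != nil {
		log.Printf("rejected: %v", err)
	}
}
```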

View File

@ -1,148 +0,0 @@
package analytics
import (
"encoding/json"
"net"
"reflect"
)
// This type provides the representation of the `context` object as defined in
// https://segment.com/docs/spec/common/#context
type Context struct {
App AppInfo `json:"app,omitempty"`
Campaign CampaignInfo `json:"campaign,omitempty"`
Device DeviceInfo `json:"device,omitempty"`
Library LibraryInfo `json:"library,omitempty"`
Location LocationInfo `json:"location,omitempty"`
Network NetworkInfo `json:"network,omitempty"`
OS OSInfo `json:"os,omitempty"`
Page PageInfo `json:"page,omitempty"`
Referrer ReferrerInfo `json:"referrer,omitempty"`
Screen ScreenInfo `json:"screen,omitempty"`
IP net.IP `json:"ip,omitempty"`
Locale string `json:"locale,omitempty"`
Timezone string `json:"timezone,omitempty"`
UserAgent string `json:"userAgent,omitempty"`
Traits Traits `json:"traits,omitempty"`
// This map is used to allow extensions to the context specifications that
// may not be documented or could be introduced in the future.
// The fields of this map are inlined in the serialized context object;
// there is no actual "extra" field in the JSON representation.
Extra map[string]interface{} `json:"-"`
}
// This type provides the representation of the `context.app` object as defined
// in https://segment.com/docs/spec/common/#context
type AppInfo struct {
Name string `json:"name,omitempty"`
Version string `json:"version,omitempty"`
Build string `json:"build,omitempty"`
Namespace string `json:"namespace,omitempty"`
}
// This type provides the representation of the `context.campaign` object as
// defined in https://segment.com/docs/spec/common/#context
type CampaignInfo struct {
Name string `json:"name,omitempty"`
Source string `json:"source,omitempty"`
Medium string `json:"medium,omitempty"`
Term string `json:"term,omitempty"`
Content string `json:"content,omitempty"`
}
// This type provides the representation of the `context.device` object as
// defined in https://segment.com/docs/spec/common/#context
type DeviceInfo struct {
Id string `json:"id,omitempty"`
Manufacturer string `json:"manufacturer,omitempty"`
Model string `json:"model,omitempty"`
Name string `json:"name,omitempty"`
Type string `json:"type,omitempty"`
Version string `json:"version,omitempty"`
AdvertisingID string `json:"advertisingId,omitempty"`
}
// This type provides the representation of the `context.library` object as
// defined in https://segment.com/docs/spec/common/#context
type LibraryInfo struct {
Name string `json:"name,omitempty"`
Version string `json:"version,omitempty"`
}
// This type provides the representation of the `context.location` object as
// defined in https://segment.com/docs/spec/common/#context
type LocationInfo struct {
City string `json:"city,omitempty"`
Country string `json:"country,omitempty"`
Region string `json:"region,omitempty"`
Latitude float64 `json:"latitude,omitempty"`
Longitude float64 `json:"longitude,omitempty"`
Speed float64 `json:"speed,omitempty"`
}
// This type provides the representation of the `context.network` object as
// defined in https://segment.com/docs/spec/common/#context
type NetworkInfo struct {
Bluetooth bool `json:"bluetooth,omitempty"`
Cellular bool `json:"cellular,omitempty"`
WIFI bool `json:"wifi,omitempty"`
Carrier string `json:"carrier,omitempty"`
}
// This type provides the representation of the `context.os` object as defined
// in https://segment.com/docs/spec/common/#context
type OSInfo struct {
Name string `json:"name,omitempty"`
Version string `json:"version,omitempty"`
}
// This type provides the representation of the `context.page` object as
// defined in https://segment.com/docs/spec/common/#context
type PageInfo struct {
Hash string `json:"hash,omitempty"`
Path string `json:"path,omitempty"`
Referrer string `json:"referrer,omitempty"`
Search string `json:"search,omitempty"`
Title string `json:"title,omitempty"`
URL string `json:"url,omitempty"`
}
// This type provides the representation of the `context.referrer` object as
// defined in https://segment.com/docs/spec/common/#context
type ReferrerInfo struct {
Type string `json:"type,omitempty"`
Name string `json:"name,omitempty"`
URL string `json:"url,omitempty"`
Link string `json:"link,omitempty"`
}
// This type provides the representation of the `context.screen` object as
// defined in https://segment.com/docs/spec/common/#context
type ScreenInfo struct {
Density int `json:"density,omitempty"`
Width int `json:"width,omitempty"`
Height int `json:"height,omitempty"`
}
// Satisfy the `json.Marshaler` interface. We have to flatten out the `Extra`
// field but the standard json package doesn't support it yet.
// Implementing this interface allows us to override the default marshaling of
// the context object and do the inlining ourselves.
//
// Related discussion: https://github.com/golang/go/issues/6213
func (ctx Context) MarshalJSON() ([]byte, error) {
v := reflect.ValueOf(ctx)
n := v.NumField()
m := make(map[string]interface{}, n+len(ctx.Extra))
// Copy the `Extra` map into the map representation of the context, it is
// important to do this operation before going through the actual struct
// fields so the latter take precedence and override duplicated values
// that would be set in the extensions.
for name, value := range ctx.Extra {
m[name] = value
}
return json.Marshal(structToMap(v, m))
}
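
A sketch of what the custom MarshalJSON above does with the Extra map: its keys are inlined next to the regular fields, and struct fields win over duplicated keys (import path assumed as before):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	analytics "gopkg.in/segmentio/analytics-go.v3" // assumed import path for this vendored v3 client
)

func main() {
	ctx := analytics.Context{
		Locale: "en-US",
		Extra: map[string]interface{}{
			"customField": 42,        // inlined at the top level, no "extra" wrapper
			"locale":      "ignored", // overridden by the Locale struct field
		},
	}

	b, err := json.Marshal(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b)) // e.g. {"customField":42,"locale":"en-US"}
}
```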

View File

@ -1,60 +0,0 @@
package analytics
import (
"errors"
"fmt"
)
// Returned by the `NewWithConfig` function when one of the configuration
// fields was set to an impossible value (like a negative duration).
type ConfigError struct {
// A human-readable message explaining why the configuration field's value
// is invalid.
Reason string
// The name of the configuration field that was carrying an invalid value.
Field string
// The value of the configuration field that caused the error.
Value interface{}
}
func (e ConfigError) Error() string {
return fmt.Sprintf("analytics.NewWithConfig: %s (analytics.Config.%s: %#v)", e.Reason, e.Field, e.Value)
}
// Instances of this type are used to represent errors returned when a field was
// not initialized properly in a structure passed as argument to one of the
// functions of this package.
type FieldError struct {
// The human-readable representation of the type of structure that wasn't
// initialized properly.
Type string
// The name of the field that wasn't properly initialized.
Name string
// The value of the field that wasn't properly initialized.
Value interface{}
}
func (e FieldError) Error() string {
return fmt.Sprintf("%s.%s: invalid field value: %#v", e.Type, e.Name, e.Value)
}
var (
// This error is returned by methods of the `Client` interface when they are
// called after the client was already closed.
ErrClosed = errors.New("the client was already closed")
// This error is used to notify the application that too many requests are
// already being sent and no more messages can be accepted.
ErrTooManyRequests = errors.New("too many requests are already in-flight")
// This error is used to notify the client callbacks that a message send
// failed because the JSON representation of a message exceeded the upper
// limit.
ErrMessageTooBig = errors.New("the message exceeds the maximum allowed size")
)

View File

@ -1,53 +0,0 @@
package analytics
import "sync"
type executor struct {
queue chan func()
mutex sync.Mutex
size int
cap int
}
func newExecutor(cap int) *executor {
e := &executor{
queue: make(chan func(), 1),
cap: cap,
}
go e.loop()
return e
}
func (e *executor) do(task func()) (ok bool) {
e.mutex.Lock()
if e.size != e.cap {
e.queue <- task
e.size++
ok = true
}
e.mutex.Unlock()
return
}
func (e *executor) close() {
close(e.queue)
}
func (e *executor) loop() {
for task := range e.queue {
go e.run(task)
}
}
func (e *executor) run(task func()) {
defer e.done()
task()
}
func (e *executor) done() {
e.mutex.Lock()
e.size--
e.mutex.Unlock()
}

View File

@ -1,40 +0,0 @@
package analytics
import "time"
// This type represents object sent in a group call as described in
// https://segment.com/docs/libraries/http/#group
type Group struct {
// This field is exported for serialization purposes and shouldn't be set by
// the application, its value is always overwritten by the library.
Type string `json:"type,omitempty"`
MessageId string `json:"messageId,omitempty"`
AnonymousId string `json:"anonymousId,omitempty"`
UserId string `json:"userId,omitempty"`
GroupId string `json:"groupId"`
Timestamp time.Time `json:"timestamp,omitempty"`
Context *Context `json:"context,omitempty"`
Traits Traits `json:"traits,omitempty"`
Integrations Integrations `json:"integrations,omitempty"`
}
func (msg Group) validate() error {
if len(msg.GroupId) == 0 {
return FieldError{
Type: "analytics.Group",
Name: "GroupId",
Value: msg.GroupId,
}
}
if len(msg.UserId) == 0 && len(msg.AnonymousId) == 0 {
return FieldError{
Type: "analytics.Group",
Name: "UserId",
Value: msg.UserId,
}
}
return nil
}

View File

@ -1,31 +0,0 @@
package analytics
import "time"
// This type represents object sent in an identify call as described in
// https://segment.com/docs/libraries/http/#identify
type Identify struct {
// This field is exported for serialization purposes and shouldn't be set by
// the application, its value is always overwritten by the library.
Type string `json:"type,omitempty"`
MessageId string `json:"messageId,omitempty"`
AnonymousId string `json:"anonymousId,omitempty"`
UserId string `json:"userId,omitempty"`
Timestamp time.Time `json:"timestamp,omitempty"`
Context *Context `json:"context,omitempty"`
Traits Traits `json:"traits,omitempty"`
Integrations Integrations `json:"integrations,omitempty"`
}
func (msg Identify) validate() error {
if len(msg.UserId) == 0 && len(msg.AnonymousId) == 0 {
return FieldError{
Type: "analytics.Identify",
Name: "UserId",
Value: msg.UserId,
}
}
return nil
}

View File

@ -1,44 +0,0 @@
package analytics
// This type is used to represent integrations in messages that support it.
// It is a free-form where values are most often booleans that enable or
// disable integrations.
// Here's a quick example of how this type is meant to be used:
//
// analytics.Track{
// UserId: "0123456789",
// Integrations: analytics.NewIntegrations()
// .EnableAll()
// .Disable("Salesforce")
// .Disable("Marketo"),
// }
//
// The specifications can be found at https://segment.com/docs/spec/common/#integrations
type Integrations map[string]interface{}
func NewIntegrations() Integrations {
return make(Integrations, 10)
}
func (i Integrations) EnableAll() Integrations {
return i.Enable("all")
}
func (i Integrations) DisableAll() Integrations {
return i.Disable("all")
}
func (i Integrations) Enable(name string) Integrations {
return i.Set(name, true)
}
func (i Integrations) Disable(name string) Integrations {
return i.Set(name, false)
}
// Sets an integration named by the first argument to the specified value, any
// value other than `false` will be interpreted as enabling the integration.
func (i Integrations) Set(name string, value interface{}) Integrations {
i[name] = value
return i
}

View File

@ -1,87 +0,0 @@
package analytics
import (
"reflect"
"strings"
)
// Imitate what the JSON package would do when serializing a struct value;
// the only difference is that we don't serialize zero-value struct fields.
// Note that this function doesn't recursively convert structures to maps, only
// the value passed as argument is transformed.
func structToMap(v reflect.Value, m map[string]interface{}) map[string]interface{} {
t := v.Type()
n := t.NumField()
if m == nil {
m = make(map[string]interface{}, n)
}
for i := 0; i != n; i++ {
field := t.Field(i)
value := v.Field(i)
name, omitempty := parseJsonTag(field.Tag.Get("json"), field.Name)
if name != "-" && !(omitempty && isZeroValue(value)) {
m[name] = value.Interface()
}
}
return m
}
// Parses a JSON tag the way the json package would do it, returning the expected
// name of the field once serialized and whether empty values should be omitted.
func parseJsonTag(tag string, defName string) (name string, omitempty bool) {
args := strings.Split(tag, ",")
if len(args) == 0 || len(args[0]) == 0 {
name = defName
} else {
name = args[0]
}
if len(args) > 1 && args[1] == "omitempty" {
omitempty = true
}
return
}
// Checks if the value given as argument is a zero-value; it is based on the
// isEmptyValue function in https://golang.org/src/encoding/json/encode.go
// but also checks struct types recursively.
func isZeroValue(v reflect.Value) bool {
switch v.Kind() {
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
return v.Len() == 0
case reflect.Bool:
return !v.Bool()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return v.Uint() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.Interface, reflect.Ptr:
return v.IsNil()
case reflect.Struct:
for i, n := 0, v.NumField(); i != n; i++ {
if !isZeroValue(v.Field(i)) {
return false
}
}
return true
case reflect.Invalid:
return true
}
return false
}
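
The helpers above are unexported, so they cannot be imported directly; as a standalone illustration of the tag-parsing rule they implement, here is a self-contained copy run against a few sample tags:

```go
package main

import (
	"fmt"
	"strings"
)

// parseJSONTag mirrors the unexported helper above: it returns the serialized
// field name (falling back to the Go field name) and whether ",omitempty" was set.
func parseJSONTag(tag, defName string) (name string, omitempty bool) {
	args := strings.Split(tag, ",")
	if len(args) == 0 || len(args[0]) == 0 {
		name = defName
	} else {
		name = args[0]
	}
	if len(args) > 1 && args[1] == "omitempty" {
		omitempty = true
	}
	return
}

func main() {
	for _, tag := range []string{"locale,omitempty", ",omitempty", "-", ""} {
		name, omit := parseJSONTag(tag, "Locale")
		fmt.Printf("%-20q -> name=%q omitempty=%v\n", tag, name, omit)
	}
}
```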

View File

@ -1,47 +0,0 @@
package analytics
import (
"log"
"os"
)
// Instances of types implementing this interface can be used to define where
// the analytics client logs are written.
type Logger interface {
// Analytics clients call this method to log regular messages about the
// operations they perform.
// Messages logged by this method are usually tagged with an `INFO` log
// level in common logging libraries.
Logf(format string, args ...interface{})
// Analytics clients call this method to log errors they encounter while
// sending events to the backend servers.
// Messages logged by this method are usually tagged with an `ERROR` log
// level in common logging libraries.
Errorf(format string, args ...interface{})
}
// This function instantiates an object that satisfies the analytics.Logger
// interface and sends logs to the standard logger passed as argument.
func StdLogger(logger *log.Logger) Logger {
return stdLogger{
logger: logger,
}
}
type stdLogger struct {
logger *log.Logger
}
func (l stdLogger) Logf(format string, args ...interface{}) {
l.logger.Printf("INFO: "+format, args...)
}
func (l stdLogger) Errorf(format string, args ...interface{}) {
l.logger.Printf("ERROR: "+format, args...)
}
func newDefaultLogger() Logger {
return StdLogger(log.New(os.Stderr, "segment ", log.LstdFlags))
}

View File

@ -1,128 +0,0 @@
package analytics
import (
"encoding/json"
"time"
)
// Values implementing this interface are used by analytics clients to notify
// the application when a message send succeeded or failed.
//
// Callback methods are called by a client's internal goroutines; there are no
// guarantees on which goroutine will trigger the callbacks, the calls can be
// made sequentially or in parallel, and the order doesn't depend on the order
// in which messages were queued to the client.
//
// Callback methods must return quickly and not cause long blocking operations
// to avoid interfering with the client's internal workflow.
type Callback interface {
// This method is called for every message that was successfully sent to
// the API.
Success(Message)
// This method is called for every message that failed to be sent to the
// API and will be discarded by the client.
Failure(Message, error)
}
// This interface is used to represent analytics objects that can be sent via
// a client.
//
// Types like analytics.Track, analytics.Page, etc... implement this interface
// and therefore can be passed to the analytics.Client.Enqueue method.
type Message interface {
// Validates the internal structure of the message, the method must return
// nil if the message is valid, or an error describing what went wrong.
validate() error
}
// Takes a message id as first argument and returns it, unless it's the zero-
// value, in which case the default id passed as second argument is returned.
func makeMessageId(id string, def string) string {
if len(id) == 0 {
return def
}
return id
}
// Returns the time value passed as first argument, unless it's the zero-value,
// in which case the default value passed as second argument is returned.
func makeTimestamp(t time.Time, def time.Time) time.Time {
if t == (time.Time{}) {
return def
}
return t
}
// This structure represents objects sent to the /v1/batch endpoint. We don't
// export this type because it's only meant to be used internally to send groups
// of messages in one API call.
type batch struct {
MessageId string `json:"messageId"`
SentAt time.Time `json:"sentAt"`
Messages []message `json:"batch"`
Context *Context `json:"context"`
}
type message struct {
msg Message
json []byte
}
func makeMessage(m Message, maxBytes int) (msg message, err error) {
if msg.json, err = json.Marshal(m); err == nil {
if len(msg.json) > maxBytes {
err = ErrMessageTooBig
} else {
msg.msg = m
}
}
return
}
func (m message) MarshalJSON() ([]byte, error) {
return m.json, nil
}
func (m message) size() int {
// The `+ 1` is for the comma that sits between each item of a JSON array.
return len(m.json) + 1
}
type messageQueue struct {
pending []message
bytes int
maxBatchSize int
maxBatchBytes int
}
func (q *messageQueue) push(m message) (b []message) {
if (q.bytes + m.size()) > q.maxBatchBytes {
b = q.flush()
}
if q.pending == nil {
q.pending = make([]message, 0, q.maxBatchSize)
}
q.pending = append(q.pending, m)
q.bytes += len(m.json)
if b == nil && len(q.pending) == q.maxBatchSize {
b = q.flush()
}
return
}
func (q *messageQueue) flush() (msgs []message) {
msgs, q.pending, q.bytes = q.pending, nil, 0
return
}
const (
maxBatchBytes = 500000
maxMessageBytes = 15000
)

View File

@ -1,32 +0,0 @@
package analytics
import "time"
// This type represents object sent in a page call as described in
// https://segment.com/docs/libraries/http/#page
type Page struct {
// This field is exported for serialization purposes and shouldn't be set by
// the application, its value is always overwritten by the library.
Type string `json:"type,omitempty"`
MessageId string `json:"messageId,omitempty"`
AnonymousId string `json:"anonymousId,omitempty"`
UserId string `json:"userId,omitempty"`
Name string `json:"name,omitempty"`
Timestamp time.Time `json:"timestamp,omitempty"`
Context *Context `json:"context,omitempty"`
Properties Properties `json:"properties,omitempty"`
Integrations Integrations `json:"integrations,omitempty"`
}
func (msg Page) validate() error {
if len(msg.UserId) == 0 && len(msg.AnonymousId) == 0 {
return FieldError{
Type: "analytics.Page",
Name: "UserId",
Value: msg.UserId,
}
}
return nil
}

View File

@ -1,117 +0,0 @@
package analytics
// This type is used to represent properties in messages that support it.
// It is a free-form object so the application can set any value it sees fit but
// a few helper methods are defined to make it easier to instantiate properties with
// common fields.
// Here's a quick example of how this type is meant to be used:
//
// analytics.Page{
// UserId: "0123456789",
// Properties: analytics.NewProperties()
// .SetRevenue(10.0)
// .SetCurrency("USD"),
// }
//
type Properties map[string]interface{}
func NewProperties() Properties {
return make(Properties, 10)
}
func (p Properties) SetRevenue(revenue float64) Properties {
return p.Set("revenue", revenue)
}
func (p Properties) SetCurrency(currency string) Properties {
return p.Set("currency", currency)
}
func (p Properties) SetValue(value float64) Properties {
return p.Set("value", value)
}
func (p Properties) SetPath(path string) Properties {
return p.Set("path", path)
}
func (p Properties) SetReferrer(referrer string) Properties {
return p.Set("referrer", referrer)
}
func (p Properties) SetTitle(title string) Properties {
return p.Set("title", title)
}
func (p Properties) SetURL(url string) Properties {
return p.Set("url", url)
}
func (p Properties) SetName(name string) Properties {
return p.Set("name", name)
}
func (p Properties) SetCategory(category string) Properties {
return p.Set("category", category)
}
func (p Properties) SetSKU(sku string) Properties {
return p.Set("sku", sku)
}
func (p Properties) SetPrice(price float64) Properties {
return p.Set("price", price)
}
func (p Properties) SetProductId(id string) Properties {
return p.Set("id", id)
}
func (p Properties) SetOrderId(id string) Properties {
return p.Set("orderId", id)
}
func (p Properties) SetTotal(total float64) Properties {
return p.Set("total", total)
}
func (p Properties) SetSubtotal(subtotal float64) Properties {
return p.Set("subtotal", subtotal)
}
func (p Properties) SetShipping(shipping float64) Properties {
return p.Set("shipping", shipping)
}
func (p Properties) SetTax(tax float64) Properties {
return p.Set("tax", tax)
}
func (p Properties) SetDiscount(discount float64) Properties {
return p.Set("discount", discount)
}
func (p Properties) SetCoupon(coupon string) Properties {
return p.Set("coupon", coupon)
}
func (p Properties) SetProducts(products ...Product) Properties {
return p.Set("products", products)
}
func (p Properties) SetRepeat(repeat bool) Properties {
return p.Set("repeat", repeat)
}
func (p Properties) Set(name string, value interface{}) Properties {
p[name] = value
return p
}
// This type represents products in the E-commerce API.
type Product struct {
ID string `json:"id,omitempty"`
SKU string `json:"sku,omitempty"`
Name string `json:"name,omitempty"`
Price float64 `json:"price"`
}
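
A runnable sketch of the fluent setters above. The import path is an assumption (substitute whatever path this vendored copy is imported under); everything else uses only functions defined in this file.

```Go
package main

import (
	"encoding/json"
	"fmt"

	// Hypothetical import path; adjust to match the vendored package.
	analytics "github.com/segmentio/analytics-go"
)

func main() {
	// Each setter writes one key into the free-form map and returns it, so
	// calls chain; Set covers anything without a dedicated helper.
	props := analytics.NewProperties().
		SetRevenue(10.0).
		SetCurrency("USD").
		SetProducts(analytics.Product{ID: "p-1", SKU: "sku-1", Name: "Widget", Price: 10.0}).
		Set("plan", "enterprise")

	out, _ := json.Marshal(props)
	fmt.Println(string(out)) // currency, plan, products and revenue keys
}
```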

View File

@ -1,32 +0,0 @@
package analytics
import "time"
// This type represents the object sent in a screen call, as described in
// https://segment.com/docs/libraries/http/#screen
type Screen struct {
// This field is exported for serialization purposes and shouldn't be set by
// the application; its value is always overwritten by the library.
Type string `json:"type,omitempty"`
MessageId string `json:"messageId,omitempty"`
AnonymousId string `json:"anonymousId,omitempty"`
UserId string `json:"userId,omitempty"`
Name string `json:"name,omitempty"`
Timestamp time.Time `json:"timestamp,omitempty"`
Context *Context `json:"context,omitempty"`
Properties Properties `json:"properties,omitempty"`
Integrations Integrations `json:"integrations,omitempty"`
}
func (msg Screen) validate() error {
if len(msg.UserId) == 0 && len(msg.AnonymousId) == 0 {
return FieldError{
Type: "analytics.Screen",
Name: "UserId",
Value: msg.UserId,
}
}
return nil
}

View File

@ -1,16 +0,0 @@
// +build !go1.6
package analytics
import "net/http"
// HTTP clients on versions of Go before 1.6 only support timeouts if the
// transport implements the `CancelRequest` method.
func supportsTimeout(transport http.RoundTripper) bool {
_, ok := transport.(requestCanceler)
return ok
}
type requestCanceler interface {
CancelRequest(*http.Request)
}

View File

@ -1,10 +0,0 @@
// +build go1.6
package analytics
import "net/http"
// HTTP clients on Go 1.6 and later always support timeouts.
func supportsTimeout(transport http.RoundTripper) bool {
return true
}
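
The two files above are selected by build tags: the `go1.6` constraint compiles the always-true version on Go 1.6 and later, while older toolchains fall back to the interface assertion. A small demonstration of that assertion against the standard library's transport (a sketch, not part of the library):

```Go
package main

import (
	"fmt"
	"net/http"
)

// requestCanceler mirrors the interface asserted by the pre-1.6 code path.
type requestCanceler interface {
	CancelRequest(*http.Request)
}

func main() {
	// *http.Transport implements CancelRequest, so the assertion succeeds;
	// a RoundTripper without that method would report false here.
	var rt http.RoundTripper = &http.Transport{}
	_, ok := rt.(requestCanceler)
	fmt.Println("per-request cancellation supported:", ok) // true
}
```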

View File

@ -1,40 +0,0 @@
package analytics
import "time"
// This type represents the object sent in a track call, as described in
// https://segment.com/docs/libraries/http/#track
type Track struct {
// This field is exported for serialization purposes and shouldn't be set by
// the application; its value is always overwritten by the library.
Type string `json:"type,omitempty"`
MessageId string `json:"messageId,omitempty"`
AnonymousId string `json:"anonymousId,omitempty"`
UserId string `json:"userId,omitempty"`
Event string `json:"event"`
Timestamp time.Time `json:"timestamp,omitempty"`
Context *Context `json:"context,omitempty"`
Properties Properties `json:"properties,omitempty"`
Integrations Integrations `json:"integrations,omitempty"`
}
func (msg Track) validate() error {
if len(msg.Event) == 0 {
return FieldError{
Type: "analytics.Track",
Name: "Event",
Value: msg.Event,
}
}
if len(msg.UserId) == 0 && len(msg.AnonymousId) == 0 {
return FieldError{
Type: "analytics.Track",
Name: "UserId",
Value: msg.UserId,
}
}
return nil
}
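
Track tightens the rule shared with Page and Screen: besides an identity, an Event name is mandatory. As with the Page sketch, validate is unexported, so this hypothetical example assumes it lives inside the analytics package.

```Go
package analytics

// exampleTrackValidation is a hypothetical, package-internal sketch of the
// two checks above.
func exampleTrackValidation() []error {
	return []error{
		// Missing Event: FieldError naming "Event".
		Track{UserId: "user-123"}.validate(),
		// Missing both identifiers: FieldError naming "UserId".
		Track{Event: "Signed Up"}.validate(),
		// Fully specified: nil.
		Track{Event: "Signed Up", AnonymousId: "anon-42"}.validate(),
	}
}
```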

View File

@ -1,89 +0,0 @@
package analytics
import "time"
// This type is used to represent traits in messages that support them.
// It is a free-form object, so the application can set any value it sees fit, but
// a few helper methods are defined to make it easier to instantiate traits with
// common fields.
// Here's a quick example of how this type is meant to be used:
//
// analytics.Identify{
// UserId: "0123456789",
// Traits: analytics.NewTraits()
// .SetFirstName("Luke")
// .SetLastName("Skywalker")
// .Set("Role", "Jedi"),
// }
//
// The specifications can be found at https://segment.com/docs/spec/identify/#traits
type Traits map[string]interface{}
func NewTraits() Traits {
return make(Traits, 10)
}
func (t Traits) SetAddress(address string) Traits {
return t.Set("address", address)
}
func (t Traits) SetAge(age int) Traits {
return t.Set("age", age)
}
func (t Traits) SetAvatar(url string) Traits {
return t.Set("avatar", url)
}
func (t Traits) SetBirthday(date time.Time) Traits {
return t.Set("birthday", date)
}
func (t Traits) SetCreatedAt(date time.Time) Traits {
return t.Set("createdAt", date)
}
func (t Traits) SetDescription(desc string) Traits {
return t.Set("description", desc)
}
func (t Traits) SetEmail(email string) Traits {
return t.Set("email", email)
}
func (t Traits) SetFirstName(firstName string) Traits {
return t.Set("firstName", firstName)
}
func (t Traits) SetGender(gender string) Traits {
return t.Set("gender", gender)
}
func (t Traits) SetLastName(lastName string) Traits {
return t.Set("lastName", lastName)
}
func (t Traits) SetName(name string) Traits {
return t.Set("name", name)
}
func (t Traits) SetPhone(phone string) Traits {
return t.Set("phone", phone)
}
func (t Traits) SetTitle(title string) Traits {
return t.Set("title", title)
}
func (t Traits) SetUsername(username string) Traits {
return t.Set("username", username)
}
func (t Traits) SetWebsite(url string) Traits {
return t.Set("website", url)
}
func (t Traits) Set(field string, value interface{}) Traits {
t[field] = value
return t
}
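
Traits works the same way as Properties, just with identity-oriented helpers. A runnable sketch under the same assumption about the import path:

```Go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	// Hypothetical import path; adjust to match the vendored package.
	analytics "github.com/segmentio/analytics-go"
)

func main() {
	traits := analytics.NewTraits().
		SetFirstName("Luke").
		SetLastName("Skywalker").
		SetEmail("luke@example.com").
		SetCreatedAt(time.Date(2018, 10, 19, 0, 0, 0, 0, time.UTC)).
		Set("role", "Jedi") // free-form fallback for anything without a helper

	out, _ := json.Marshal(traits)
	fmt.Println(string(out)) // createdAt, email, firstName, lastName, role
}
```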

vendor/gopkg.in/yaml.v1/LICENSE (generated, vendored)
View File

@ -1,188 +0,0 @@
Copyright (c) 2011-2014 - Canonical Inc.
This software is licensed under the LGPLv3, included below.
As a special exception to the GNU Lesser General Public License version 3
("LGPL3"), the copyright holders of this Library give you permission to
convey to a third party a Combined Work that links statically or dynamically
to this Library without providing any Minimal Corresponding Source or
Minimal Application Code as set out in 4d or providing the installation
information set out in section 4e, provided that you comply with the other
provisions of LGPL3 and provided that you meet, for the Application the
terms and conditions of the license(s) which apply to the Application.
Except as stated in this special exception, the provisions of LGPL3 will
continue to comply in full to this Library. If you modify this Library, you
may apply this exception to your version of this Library, but you are not
obliged to do so. If you do not wish to do so, delete this exception
statement from your version. This exception does not (and cannot) modify any
license terms which apply to the Application, with which you must still
comply.
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

View File

@ -1,31 +0,0 @@
The following files were ported to Go from C files of libyaml, and thus
are still covered by their original copyright and license:
apic.go
emitterc.go
parserc.go
readerc.go
scannerc.go
writerc.go
yamlh.go
yamlprivateh.go
Copyright (c) 2006 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

vendor/gopkg.in/yaml.v1/README.md (generated, vendored)
View File

@ -1,128 +0,0 @@
# YAML support for the Go language
Introduction
------------
The yaml package enables Go programs to comfortably encode and decode YAML
values. It was developed within [Canonical](https://www.canonical.com) as
part of the [juju](https://juju.ubuntu.com) project, and is based on a
pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
C library to parse and generate YAML data quickly and reliably.
Compatibility
-------------
The yaml package supports most of YAML 1.1 and 1.2, including support for
anchors, tags, map merging, etc. Multi-document unmarshalling is not yet
implemented, and base-60 floats from YAML 1.1 are purposefully not
supported since they're a poor design and are gone in YAML 1.2.
Installation and usage
----------------------
The import path for the package is *gopkg.in/yaml.v1*.
To install it, run:
go get gopkg.in/yaml.v1
API documentation
-----------------
If opened in a browser, the import path itself leads to the API documentation:
* [https://gopkg.in/yaml.v1](https://gopkg.in/yaml.v1)
API stability
-------------
The package API for yaml v1 will remain stable as described in [gopkg.in](https://gopkg.in).
License
-------
The yaml package is licensed under the LGPL with an exception that allows it to be linked statically. Please see the LICENSE file for details.
Example
-------
```Go
package main
import (
"fmt"
"log"
"gopkg.in/yaml.v1"
)
var data = `
a: Easy!
b:
  c: 2
  d: [3, 4]
`
type T struct {
A string
B struct{C int; D []int ",flow"}
}
func main() {
t := T{}
err := yaml.Unmarshal([]byte(data), &t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t:\n%v\n\n", t)
d, err := yaml.Marshal(&t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t dump:\n%s\n\n", string(d))
m := make(map[interface{}]interface{})
err = yaml.Unmarshal([]byte(data), &m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m:\n%v\n\n", m)
d, err = yaml.Marshal(&m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m dump:\n%s\n\n", string(d))
}
```
This example will generate the following output:
```
--- t:
{Easy! {2 [3 4]}}
--- t dump:
a: Easy!
b:
  c: 2
  d: [3, 4]
--- m:
map[a:Easy! b:map[c:2 d:[3 4]]]
--- m dump:
a: Easy!
b:
  c: 2
  d:
  - 3
  - 4
```

Some files were not shown because too many files have changed in this diff.