Documentation ¶
Index ¶
- func GetContentLength(url *url.URL, client *http.Client) (int64, error)
- func GetContentLengthViaGET(url *url.URL, client *http.Client) (int64, error)
- func GetContentLengthViaHEAD(url *url.URL, client *http.Client) (int64, error)
- func NewSeqRangingClient(ranger Ranger, client *http.Client) http.RoundTripper
- func NewSeqReader(client *http.Client, url string, ranger SizedRanger) io.ReadSeekCloser
- type ByteRange
- type Loader
- type LoaderFunc
- type ParallelWriter
- type RangedSource
- type Ranger
- type RangingHTTPClient
- type RemoteReader
- type SizedRanger
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GetContentLength ¶ added in v0.2.0
func GetContentLength(url *url.URL, client *http.Client) (int64, error)
GetContentLength returns the content length of the given URL, using the given http.Client. It first attempts a HEAD request and, if that fails, falls back to a GET request.
func GetContentLengthViaGET ¶ added in v0.2.0
func GetContentLengthViaGET(url *url.URL, client *http.Client) (int64, error)
GetContentLengthViaGET returns the content length of the given URL, using the given http.Client. It issues a GET request with a zeroed Range header to determine the content length.
func GetContentLengthViaHEAD ¶ added in v0.2.0
func GetContentLengthViaHEAD(url *url.URL, client *http.Client) (int64, error)
GetContentLengthViaHEAD returns the content length of the given URL, using the given http.Client. It issues a HEAD request to determine the content length.
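As a minimal usage sketch (assuming this package is imported as `ranger` under a placeholder import path and queried with a placeholder URL; neither is stated on this page):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/url"

	ranger "example.com/ranger" // placeholder import path; substitute this package's real path
)

func main() {
	// Placeholder URL for a large remote file.
	u, err := url.Parse("https://example.com/large-file.bin")
	if err != nil {
		log.Fatal(err)
	}

	// GetContentLength tries HEAD first, then falls back to GET.
	length, err := ranger.GetContentLength(u, http.DefaultClient)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote file is %d bytes\n", length)

	// The single-method variants can be called directly when the server is
	// known to support (or reject) HEAD requests.
	_, _ = ranger.GetContentLengthViaHEAD(u, http.DefaultClient)
	_, _ = ranger.GetContentLengthViaGET(u, http.DefaultClient)
}
```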
func NewSeqRangingClient ¶ added in v0.6.0
func NewSeqRangingClient(ranger Ranger, client *http.Client) http.RoundTripper
func NewSeqReader ¶ added in v0.6.0
func NewSeqReader(client *http.Client, url string, ranger SizedRanger) io.ReadSeekCloser
NewSeqReader returns a new io.ReadSeekCloser that reads from the given url using the given client. Instead of reading the whole file at once, it reads the file in sequential chunks, using the given ranger to determine the ranges to read. This allows very large files to be read in CDN-cacheable chunks via Range GET requests.
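A sketch of the sequential download flow, under the same placeholder import path and URL as above, and assuming NewRanger takes the chunk size in bytes (its exact signature is not shown on this page):

```go
package main

import (
	"io"
	"log"
	"net/http"
	"net/url"
	"os"

	ranger "example.com/ranger" // placeholder import path
)

func main() {
	rawURL := "https://example.com/large-file.bin" // placeholder URL
	u, err := url.Parse(rawURL)
	if err != nil {
		log.Fatal(err)
	}

	// Discover the total size so the SizedRanger knows where the file ends.
	length, err := ranger.GetContentLength(u, http.DefaultClient)
	if err != nil {
		log.Fatal(err)
	}

	// Assumed: NewRanger takes the chunk size in bytes (8 MiB here).
	sized := ranger.NewSizedRanger(length, ranger.NewRanger(8<<20))

	// Read the file front to back, one CDN-cacheable range GET at a time.
	r := ranger.NewSeqReader(http.DefaultClient, rawURL, sized)
	defer r.Close()

	out, err := os.Create("large-file.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, r); err != nil {
		log.Fatal(err)
	}
}
```

NewSeqRangingClient composes similarly: it builds an http.RoundTripper from a Ranger and an http.Client, so it can be installed as the Transport of another http.Client.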
Types ¶
type ByteRange ¶
ByteRange represents a range of bytes available in a file.
func (ByteRange) RangeHeader ¶ added in v0.2.0
RangeHeader returns the HTTP header representation of the byte range, suitable for use in the Range header, as described in https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Range
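A short sketch of attaching a chunk's Range header to a request. It assumes RangeHeader returns a string and that NewRanger takes the chunk size in bytes; neither signature is shown on this page, and the import path and URL are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	ranger "example.com/ranger" // placeholder import path
)

func main() {
	// A 10 MiB file split into 1 MiB chunks (NewRanger's chunk-size
	// argument is an assumption).
	sized := ranger.NewSizedRanger(10<<20, ranger.NewRanger(1<<20))

	// Find the chunk containing byte offset 3,500,000 and request only it.
	br := sized.At(3_500_000)

	req, err := http.NewRequest(http.MethodGet, "https://example.com/large-file.bin", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Range", br.RangeHeader()) // RangeHeader is assumed to return a string
	fmt.Println("requesting", req.Header.Get("Range"))
}
```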
type Loader ¶
Loader implements a Load method that provides the data for a given byte range chunk as a byte slice.
`Load` should be safe to call from multiple goroutines.
If err is nil, the returned byte slice must always contain exactly as many bytes as were asked for, i.e. the length of the returned slice must equal the number of bytes in the requested range `br`. (A sketch of a simple HTTP-backed Loader follows the LoaderFunc entry below.)
func NewSingleFlightLoader ¶ added in v0.4.0
type LoaderFunc ¶
LoaderFunc converts a Load function into a Loader type.
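A sketch of a custom Loader that fetches each chunk with its own range GET, plus the LoaderFunc conversion. The Load signature used here, Load(br ByteRange) ([]byte, error), LoaderFunc's underlying function type, and RangeHeader's string return are assumptions inferred from the descriptions on this page; the import path and URL are placeholders.

```go
package main

import (
	"fmt"
	"io"
	"net/http"

	ranger "example.com/ranger" // placeholder import path
)

// httpLoader fetches each requested chunk with its own range GET. Because
// every call issues an independent request through http.Client (which is
// safe for concurrent use), Load is safe to call from multiple goroutines.
type httpLoader struct {
	url    string
	client *http.Client
}

// Load assumes the Loader interface is Load(br ByteRange) ([]byte, error);
// the exact signature is not shown on this page.
func (l httpLoader) Load(br ranger.ByteRange) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, l.url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", br.RangeHeader())
	resp, err := l.client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	// Verify the server honored the range, so the returned slice has
	// exactly as many bytes as were asked for.
	if resp.StatusCode != http.StatusPartialContent {
		return nil, fmt.Errorf("expected 206 Partial Content, got %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// Assuming LoaderFunc's underlying type matches Load's signature, a method
// value (or any plain function) can also be converted into a Loader:
var (
	asLoader ranger.Loader = httpLoader{url: "https://example.com/large-file.bin", client: http.DefaultClient}
	viaFunc  ranger.Loader = ranger.LoaderFunc(httpLoader{url: "https://example.com/large-file.bin", client: http.DefaultClient}.Load)
)
```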
type ParallelWriter ¶ added in v0.5.0
func NewParallelWriter ¶ added in v0.5.0
func NewParallelWriter(length int64, loader Loader, ranger Ranger) *ParallelWriter
type RangedSource ¶
type RangedSource struct {
// contains filtered or unexported fields
}
RangedSource represents a remote file that can be read in chunks using the given loader.
func NewRangedSource ¶
func NewRangedSource(length int64, loader Loader, ranger Ranger) RangedSource
func (RangedSource) Ranges ¶ added in v0.3.0
func (rs RangedSource) Ranges() []ByteRange
func (RangedSource) Reader ¶
func (rs RangedSource) Reader(parallelism int) RemoteReader
Reader returns an io.Reader that reads the data in parallel, using a number of goroutines equal to the given parallelism count. Data is still returned in order.
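A network-free sketch of RangedSource using an in-memory Loader. The Load signature, the From/To bounds on ByteRange (assumed inclusive), and NewRanger's chunk-size argument are all assumptions not shown on this page; the value returned by Reader is used only as the io.Reader described above.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"

	ranger "example.com/ranger" // placeholder import path
)

// memLoader serves chunks from an in-memory buffer. The Load signature and
// ByteRange's From/To bounds (assumed inclusive) are not shown on this page.
type memLoader struct{ data []byte }

func (m memLoader) Load(br ranger.ByteRange) ([]byte, error) {
	return m.data[br.From : br.To+1], nil
}

func main() {
	payload := bytes.Repeat([]byte("ranger!"), 1<<16) // 448 KiB of sample data

	// Split the payload into 64 KiB chunks (NewRanger's chunk-size argument
	// is an assumption).
	rs := ranger.NewRangedSource(int64(len(payload)), memLoader{data: payload}, ranger.NewRanger(64<<10))
	fmt.Println("chunks:", len(rs.Ranges()))

	// Read everything back through 4 goroutines; data still arrives in order.
	got, err := io.ReadAll(rs.Reader(4))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("matches original:", bytes.Equal(got, payload))
}
```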
type Ranger ¶
type Ranger struct {
// contains filtered or unexported fields
}
Ranger can split a file into chunks of a given size.
func NewRanger ¶
NewRanger creates a new Ranger with the given chunk size. If the chunk size is <= 0, the default chunk size is used.
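For illustration (assuming NewRanger takes the chunk size in bytes, which is not shown on this page, and using the placeholder import path from the earlier sketches):

```go
package main

import (
	"net/http"

	ranger "example.com/ranger" // placeholder import path
)

func main() {
	// Assumed: NewRanger takes the chunk size in bytes.
	chunked := ranger.NewRanger(4 << 20) // explicit 4 MiB chunks
	deflt := ranger.NewRanger(0)         // <= 0 falls back to the package default

	// A Ranger feeds the other constructors in this package, for example:
	_ = ranger.NewSizedRanger(100<<20, chunked)
	_ = ranger.NewRangingClient(deflt, http.DefaultClient, 8)
}
```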
type RangingHTTPClient ¶
type RangingHTTPClient struct {
// contains filtered or unexported fields
}
RangingHTTPClient wraps another HTTP client to issue all requests in pre-defined chunks.
func NewRangingClient ¶ added in v0.3.0
func NewRangingClient(ranger Ranger, client *http.Client, parallelism int) *RangingHTTPClient
NewRangingClient wraps and uses the given http.Client to make requests only for chunks designated by the given Ranger, but does so in parallel with the given number of goroutines. This is useful for downloading large files from cache-friendly sources in manageable chunks, with the added speed benefits of parallelism.
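A sketch of a parallel chunked download. It assumes RangingHTTPClient exposes a Do method analogous to http.Client.Do (its methods are not listed on this page) and that NewRanger takes the chunk size in bytes; the import path and URL are placeholders.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"

	ranger "example.com/ranger" // placeholder import path
)

func main() {
	// Fetch in 8 MiB ranges, 4 at a time (NewRanger's argument is an assumption).
	client := ranger.NewRangingClient(ranger.NewRanger(8<<20), http.DefaultClient, 4)

	req, err := http.NewRequest(http.MethodGet, "https://example.com/large-file.bin", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Assumed: RangingHTTPClient has a Do method like http.Client.Do, returning
	// a response whose body is the chunks stitched back together in order.
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("large-file.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```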
type RemoteReader ¶ added in v0.3.0
type SizedRanger ¶ added in v0.6.0
type SizedRanger struct {
// contains filtered or unexported fields
}
func NewSizedRanger ¶ added in v0.6.0
func NewSizedRanger(length int64, ranger Ranger) SizedRanger
func (SizedRanger) At ¶ added in v0.6.0
func (r SizedRanger) At(offset int64) ByteRange
func (SizedRanger) Length ¶ added in v0.6.0
func (r SizedRanger) Length() int64
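A small sketch of walking a file's chunks with At and Length, assuming NewRanger takes the chunk size in bytes and RangeHeader returns a string (neither is shown on this page):

```go
package main

import (
	"fmt"

	ranger "example.com/ranger" // placeholder import path
)

func main() {
	const chunkSize = 1 << 20 // 1 MiB

	// A file of 5 MiB plus 123 bytes, so the last chunk is shorter.
	sized := ranger.NewSizedRanger(5<<20+123, ranger.NewRanger(chunkSize))

	// At returns the chunk containing the given offset; Length reports the
	// total size, so stepping by chunkSize visits every chunk including the
	// final partial one.
	for off := int64(0); off < sized.Length(); off += chunkSize {
		fmt.Println(sized.At(off).RangeHeader())
	}
}
```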