For more information on Blobs in general, please see some of the many articles on
MSDN or the Azure documentation. Some of the core features of the Blob provider are:

- You can easily move between containers, folders and blobs. Simply dotting into a container
  or folder will automatically request the children of that node from Azure. This allows
  easy exploration of your blob assets, directly from within the REPL.
- Support exists for both page and block blobs.
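The examples below assume that an `Azure` type has already been generated by the provider.
A minimal setup sketch (the package reference and the emulator connection string are
assumptions; substitute your own account details):

```fsharp
#r "nuget: FSharp.Azure.StorageTypeProvider"
open FSharp.Azure.StorageTypeProvider

// Connect to the local storage emulator; swap in a real connection string as needed.
type Azure = AzureTypeProvider<"UseDevelopmentStorage=true">
```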
```fsharp
let container = Azure.Containers.samples
let theBlob = container.``folder/``.``childFile.txt``
printfn "Blob '%s' is %d bytes big." theBlob.Name (theBlob.Size())
```

Output:

```text
Blob 'folder/childFile.txt' is 16 bytes big.
```
You can also perform useful helper actions on folders, such as pulling back all blobs in a folder.
```fsharp
let folder = container.``folder2/``
// passing true includes blobs in all nested subfolders
let blobs = folder.ListBlobs(true)
printfn "Folder '%s' has the following blobs: %A" folder.Path blobs
```

Output:

```text
Folder 'folder2/' has the following blobs: seq
    [folder2/child/descendant4.txt; folder2/child/grandchild1/descendant1.txt;
     folder2/child/grandchild1/descendant2.txt;
     folder2/child/grandchild2/descendant3.txt]
```
Also note that blobs support hot schema loading, so schema updates occur as the contents of your storage account change.
Individual files, folders and containers share a common base type, so list operations are possible, e.g.:
```fsharp
let totalSize =
    [ container.``file1.txt``
      container.``file2.txt``
      container.``file3.txt``
      container.``sample.txt`` ]
    |> List.sumBy(fun blob -> blob.Size())
printfn "These files take up %d bytes." totalSize
```

Output:

```text
These files take up 220 bytes.
```
You can quickly read the contents of a blob synchronously or asynchronously.
```fsharp
// sync read
let contents = theBlob.Read()
printfn "sync contents = '%s'" contents

// async read
async {
    let! contentsAsync = theBlob.ReadAsync()
    printfn "async contents = '%s'" contentsAsync
} |> Async.RunSynchronously
```

Output:

```text
sync contents = 'child file stuff'
async contents = 'child file stuff'
```
In addition, the provider has support for custom methods for different document types, e.g. XML.
```fsharp
open System.Xml.Linq

let (contentsAsText:string) = container.``data.xml``.Read()
// only available on XML documents
let (contentsAsXml:XDocument) = container.``data.xml``.ReadAsXDocument()
printfn "text output = '%O'" contentsAsText
printfn "xml output = '%O'" contentsAsXml
```

Output:

```text
text output = '<data><items><item>thing</item></items></data>'
xml output = '<data>
  <items>
    <item>thing</item>
  </items>
</data>'
```
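Once you have an `XDocument`, standard LINQ to XML works as usual. A small sketch (not part of the
provider itself) that extracts the item values from the document above:

```fsharp
// Standard LINQ to XML over the parsed document (nothing provider-specific here).
let itemValues =
    contentsAsXml.Descendants(XName.Get "item")
    |> Seq.map (fun element -> element.Value)
    |> List.ofSeq
printfn "items = %A" itemValues // prints ["thing"]
```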
The provider exposes the ability to easily open a stream to a document for sequential reading.
This is extremely useful for previewing large files.
```fsharp
let streamReader = container.``sample.txt``.OpenStreamAsText()
while (not streamReader.EndOfStream) do
    printfn "LINE: '%s'" (streamReader.ReadLine())
```

Output:

```text
LINE: 'the quick brown fox jumps over the lazy dog'
LINE: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras malesuada.'
LINE: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla porttitor.'
```
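For example, here is a sketch of previewing only the first couple of lines without pulling down the
whole blob (`sample.txt` stands in for a genuinely large file here):

```fsharp
// Lazily pull lines from the stream and stop after the first two.
let preview =
    let reader = container.``sample.txt``.OpenStreamAsText()
    seq { while not reader.EndOfStream do yield reader.ReadLine() }
    |> Seq.truncate 2
    |> List.ofSeq
preview |> List.iter (printfn "PREVIEW: '%s'")
```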
Again, since files share a common type, you can easily merge multiple sequential streams into one:
```fsharp
let lines =
    [ container.``file1.txt``
      container.``file2.txt``
      container.``file3.txt``
      container.``sample.txt`` ]
    |> Seq.collect(fun file -> file.ReadLines()) // could also use yield! syntax within a seq { }
printfn "starting to read all lines"
for line in lines do
    printfn "%s" line
printfn "finished reading all lines"
```

Output:

```text
starting to read all lines
stuff
more stuff
even more stuff
the quick brown fox jumps over the lazy dog
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras malesuada.
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla porttitor.
finished reading all lines
```
In addition to using the Azure Storage Emulator, you can also simply provide the type provider
with a JSON file containing the list of blob containers, folders and files. This is particularly
useful within the context of a CI process, or when you know a specific "known good" structure of
blobs within a storage account.
You can still access blobs using the compile-time storage connection string if provided, or
override as normal at runtime.
```fsharp
type BlobSchema = AzureTypeProvider<blobSchema = "BlobSchema.json">
let fileFromSchema = BlobSchema.Containers.samples.``file3.txt``
```
The contents of `BlobSchema.json` look as follows:

```json
{
    "samples": {
        "file1.txt": { "Type": "blockblob" },
        "file2.txt": null,
        "file3.txt": { "Type": "pageblob" },
        "folder/": {
            "childFile.txt": null
        },
        "folder2/": {
            "child/": {
                "descendant4.txt": null
            }
        }
    },
    "random": {
        "file.txt": null,
        "folder/": {
            "emptyFolder/": null
        }
    },
    "emptyContainer": { }
}
```
Note that folder names must end with a forward slash, e.g. `myfolder/`. Also observe that you can
specify the `Type` of a blob as either `pageblob` or `blockblob`. If not specified, this defaults
to `blockblob`. You can leave "empty" values as either `null` or `{ }`.
There are times when working with blobs (particularly when working with an offline schema) when you
need to access blobs in a "stringly typed" fashion. There are three ways to do this within the
type provider.

For read access to blobs, you can use the `Try...` methods that are available on containers and
folders. These asynchronously check whether the blob exists before returning an optional handle to it.
```fsharp
let fileAsBlockBlob = container.TryGetBlockBlob "file1.txt" |> Async.RunSynchronously
printfn "Does file1.txt exist as a block blob? %b" (Option.isSome fileAsBlockBlob)
let fileAsPageBlob = container.TryGetPageBlob "file1.txt" |> Async.RunSynchronously
printfn "Does file1.txt exist as a page blob? %b" (Option.isSome fileAsPageBlob)
let doesNotExist = container.TryGetBlockBlob "doesNotExist" |> Async.RunSynchronously
printfn "Does doesNotExist exist as a block blob? %b" (Option.isSome doesNotExist)
```

Output:

```text
Does file1.txt exist as a block blob? true
Does file1.txt exist as a page blob? false
Does doesNotExist exist as a block blob? false
```
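Since the result is a standard F# option, pattern matching is a natural fit. A brief sketch,
assuming the returned handle exposes the usual blob members such as `Size()`:

```fsharp
// The Try... methods return a standard F# option, so pattern matching fits naturally.
match container.TryGetBlockBlob "file1.txt" |> Async.RunSynchronously with
| Some blob -> printfn "Found it: %d bytes." (blob.Size())
| None -> printfn "No such block blob."
```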
You can also "unsafely" access a block blob using indexers. This returns a blob reference which may or
may not exist, but which can be used quickly and easily; this is especially useful if you want to create
a blob that does not yet exist. However, be aware that any attempt to read a blob that does not exist
will throw an Azure SDK exception.
```fsharp
// get a handle to a blob that does not (yet) exist
let newBlob = container.["doesNotExist"]
// create it by uploading some text
newBlob.AsCloudBlockBlob().UploadText "hello"
printfn "Contents of blob: %s" (newBlob.Read())
// clean up
newBlob.AsCloudBlockBlob().Delete()
```
Lastly, you can always fall back to the raw .NET Azure SDK (which the type provider sits on top of).
```fsharp
// Access the 'samples' container using the raw SDK.
let rawContainer = Azure.Containers.samples.AsCloudBlobContainer()
// All blobs can be referred to as an ICloudBlob.
let iCloudBlob = Azure.Containers.samples.``file1.txt``.AsICloudBlob()
// Only available on CloudBlockBlobs.
let blockBlob = Azure.Containers.samples.``file1.txt``.AsCloudBlockBlob()
// Only available on CloudPageBlobs.
let pageBlob = Azure.Containers.samples.``pageData.bin``.AsCloudPageBlob()
```
You can quickly and easily download files, folders or entire containers to local disk.
```fsharp
// download file1.txt asynchronously into "C:\temp\files"
let asyncFileDownload = container.``file1.txt``.Download(@"C:\temp\files\")
```
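Note that `Download` hands back an async workflow rather than downloading eagerly. A sketch of
actually executing it (that the return type is `Async<unit>` is an assumption here):

```fsharp
// Run the workflow to perform the download (assumes it is an Async<unit>).
asyncFileDownload |> Async.RunSynchronously
```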
The type provider exposes a simple method for generating time-dependent SAS codes for
single files.
```fsharp
open System

let duration = TimeSpan.FromMinutes 37.
printfn "Current time: %O" DateTime.UtcNow
printfn "SAS expiry: %O" (DateTime.UtcNow.Add duration)
let sasCode = container.``file1.txt``.GenerateSharedAccessSignature duration
printfn "SAS URI: %O" sasCode
```

Output:

```text
Current time: 09/11/2017 11:05:59
SAS expiry: 09/11/2017 11:42:59
SAS URI: http://127.0.0.1:10000/devstoreaccount1/samples/file1.txt?sv=2015-12-11&sr=b&sig=c%2FlQ6WY98onoN54Q6CmMqV8j%2BStWdkFOdT%2Bszc55v%2FA%3D&se=2017-11-09T11:42:59Z&sp=rwdl
```
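Anyone holding that URI can then access the blob directly over HTTP until the signature expires. A
sketch of consuming it (assuming `sasCode` is the full URI printed above):

```fsharp
open System.Net.Http

// Plain HTTP GET against the SAS URI; no storage credentials required.
let http = new HttpClient()
let contentViaSas =
    http.GetStringAsync(string sasCode)
    |> Async.AwaitTask
    |> Async.RunSynchronously
printfn "Downloaded via SAS: %s" contentViaSas
```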