Vereign Community - Latest posts
https://community.vereign.com

TOL: Support for did:ipid (Thinking Out Loud)

Similar to the thoughts voiced in TOL: Support for did:sidetree, we should also consider adding support for did:ipid.

https://did-ipid.github.io/ipid-did-method/

The rationale is somewhat similar: We are already using IPFS for a couple of reasons, including for our SEAL product. In fact, one might even consider SEAL a potentially related method, as it stores all its additional information in IPFS, as well.

So did:ipid fits very well into the overall picture, is quite useful for a lot of the simpler use cases, and can be self-hosted very easily (see conversation about “Vereign SSI Node”).

It seems like a good addition to the mix, and something that might also be rather useful in the context of Gaia-X, as it works without a ledger.
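
To make that concrete: per the did:ipid spec linked above, the method-specific identifier is an IPNS name, so resolving the DID document is essentially an IPNS lookup. A minimal sketch of that mapping (helper names are our own, not from the spec):

```python
# Minimal sketch of did:ipid identifier handling. Per the method
# spec, the method-specific id is an IPNS name (a libp2p peer ID),
# and the DID document is resolved by fetching that IPNS path.
# Function names here are illustrative, not from the spec.

def parse_ipid_did(did: str) -> str:
    """Extract the IPNS name embedded in a did:ipid DID."""
    prefix = "did:ipid:"
    if not did.startswith(prefix) or len(did) == len(prefix):
        raise ValueError(f"not a valid did:ipid DID: {did}")
    return did[len(prefix):]

def ipns_path(did: str) -> str:
    """Map a did:ipid DID to the IPNS path holding its DID document."""
    return f"/ipns/{parse_ipid_did(did)}"

# e.g. ipns_path("did:ipid:QmExampleKey") -> "/ipns/QmExampleKey"
```

Since the DID document lives in IPFS/IPNS, any node, including a self-hosted "Vereign SSI Node", can serve it without touching a ledger.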

https://community.vereign.com/t/tol-support-for-did-ipid/421#post_1 Tue, 25 Oct 2022 09:18:29 +0000 community.vereign.com-post-939
TOL: Support for did:sidetree (Thinking Out Loud)

did:sidetree relies heavily on IPFS, which we are already using a lot, and has the potential to add SSI capabilities to any kind of blockchain / ledger ecosystem with comparatively limited effort.

https://identity.foundation/sidetree/spec/

This makes it strategically valuable beyond its did:ion implementation. We would want did:sidetree support properly encapsulated, so that we can re-use the implementation with any ledger.

Ledgers that could benefit from proper did:sidetree support include:

  • Corda
  • æternity
  • any other ledger without a native SSI implementation/stack

Components

At first glance it seems did:sidetree should definitely be added to Organizational Credential Manager & Trust Services API. It might also be relevant to the Personal Credential Manager.

And we should understand how to properly encapsulate its implementation so that multiple did: methods like did:ion, did:æ and did:whatever that are all did:sidetree based would be able to re-use the same code paths.
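
One way to sketch that encapsulation (class and method names here are illustrative assumptions, not from the Sidetree spec): keep batching and the IPFS/CAS write in a shared core, and make only the anchoring step ledger-specific.

```python
# Sketch of a ledger-agnostic Sidetree core: the content-addressable
# storage (IPFS) handling is shared, while each did: method plugs in
# its own anchoring backend. All names here are illustrative.
from abc import ABC, abstractmethod

class LedgerAnchor(ABC):
    """The only ledger-specific part: anchoring a string on a chain."""

    @abstractmethod
    def write_anchor(self, anchor_string: str) -> str:
        """Write the anchor string on-chain; return a transaction reference."""

class IonAnchor(LedgerAnchor):
    """did:ion anchors on Bitcoin (simulated here)."""
    def write_anchor(self, anchor_string: str) -> str:
        return f"btc-tx:{anchor_string}"

class AeternityAnchor(LedgerAnchor):
    """Hypothetical backend for an æternity-based method (simulated)."""
    def write_anchor(self, anchor_string: str) -> str:
        return f"ae-tx:{anchor_string}"

class SidetreeCore:
    """Shared code path, re-usable with any ledger backend."""
    def __init__(self, anchor: LedgerAnchor):
        self.anchor = anchor

    def publish_batch(self, batch_cid: str) -> str:
        # In real Sidetree the anchor string also encodes the number of
        # operations; here we just anchor the IPFS CID of the batch file.
        return self.anchor.write_anchor(batch_cid)
```

Organizational Credential Manager and the Trust Services API would then instantiate the shared core with whichever anchor backend the target ledger requires.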

Questions

Does this make sense? Is there perhaps a more valuable target we are missing?

Would this perhaps also be of interest for our Gaia-X Federation Services (GXFS) collaboration and partnership?

https://community.vereign.com/t/tol-support-for-did-sidetree/420#post_1 Mon, 24 Oct 2022 13:03:47 +0000 community.vereign.com-post-938
Yapeal Timestamped Document with History

Source code of the timestamping service creating the PKCS#7:

https://drive.google.com/file/d/1A2jLdZl1X7z8WPektWZhYSyUK1xCONy0/view?usp=sharing

https://community.vereign.com/t/yapeal-timestamped-document-with-history/407#post_3 Tue, 17 May 2022 13:27:04 +0000 community.vereign.com-post-931
Use of vereign for gmail through outlook

Hello Phantom,

Sadly, it is not possible to use Vereign to send signed emails from your Gmail address via Outlook.
To access add-ins for Outlook, you must be running Exchange or Office 365 on the backend. That means you cannot tap into them for a personal account with Gmail, Yahoo, or another such service.

Best Regards,
Rosen Georgiev

https://community.vereign.com/t/use-of-vereign-for-gmail-through-outlook/413#post_2 Tue, 17 May 2022 13:15:14 +0000 community.vereign.com-post-930
Use of vereign for gmail through outlook

Hi,

Is there a way I can use Vereign through Outlook for my Gmail account as well? I have a Microsoft account and can use that, but I was wondering whether this is possible, or whether you are working on making it possible.

Many Thanks

https://community.vereign.com/t/use-of-vereign-for-gmail-through-outlook/413#post_1 Tue, 17 May 2022 08:57:42 +0000 community.vereign.com-post-929
Yapeal Timestamped Document with History

@yapmkes, can you give me access? My email is [email protected]; I've sent a request.

https://community.vereign.com/t/yapeal-timestamped-document-with-history/407#post_2 Thu, 28 Apr 2022 09:15:33 +0000 community.vereign.com-post-925
Yapeal Timestamped Document with History

Can you check the timestamped document? Maybe you can see why Adobe Reader says the document has been altered.

I have a zip file with the request (including the hash), the response, the PDF, pkcs7.der, and analysis texts.

https://drive.google.com/file/d/1zVo2GlB51p3XIkLw73xyjlp38gUNRrwC/view?usp=sharing

https://community.vereign.com/t/yapeal-timestamped-document-with-history/407#post_1 Mon, 11 Apr 2022 09:25:16 +0000 community.vereign.com-post-920
Input sought: Story board idea for a "how does SEAL work" short explainer video
georg.greve:

Does this help you understand how Vereign SEAL works?

Yes, it shows the process in an understandable way.

Yes, an animated video will illustrate it even better.

Nothing is too much, I guess.
As for the missing part, I am thinking about a very short background story, perhaps just a message, that should be emotional - people react to emotions far more than to logic. It could be something funny, or something that triggers a pleasant thought. And, yes, if it is related to the blockchain, even better. So I will continue thinking…

https://community.vereign.com/t/input-sought-story-board-idea-for-a-how-does-seal-work-short-explainer-video/404#post_4 Thu, 07 Apr 2022 12:27:46 +0000 community.vereign.com-post-916
Input sought: Story board idea for a "how does SEAL work" short explainer video
georgi.michev:

Lovely idea!

Glad you like it. :slight_smile:

You might start with the questions at the end: :wink:

https://community.vereign.com/t/input-sought-story-board-idea-for-a-how-does-seal-work-short-explainer-video/404#post_3 Wed, 06 Apr 2022 19:26:49 +0000 community.vereign.com-post-906
Input sought: Story board idea for a "how does SEAL work" short explainer video

Lovely idea!
What can I do to help?

https://community.vereign.com/t/input-sought-story-board-idea-for-a-how-does-seal-work-short-explainer-video/404#post_2 Wed, 06 Apr 2022 16:22:09 +0000 community.vereign.com-post-905
Input sought: Story board idea for a "how does SEAL work" short explainer video

Dear Vereign Community,

We regularly hear from people who would like to know how Vereign SEAL works. In an age of so many products that make promises they cannot keep, and with something that looks and feels so “magical” in operation, that’s easily understood.

So we were thinking of making a small explainer video that could be added to the web page and shared over social media.

As part of this, I’ve been trying to sketch very roughly my idea.

All done quick and dirty in 30 minutes, just to outline the idea.

Just imagine this animated, with an “IKEA manual” feel to it, with a simple voice over in English (and perhaps 2-3 other languages) plus subtitles that can be translated easily:

Main story board


…and if we also want to add the blockchain verification

Questions

So here are my questions to you:

  • Does this help you understand how Vereign SEAL works?
  • Do you think this kind of video would make it easier for others?
  • What is missing, what is perhaps too much?
  • What would you improve?

and, of course:

  • Do you perhaps have a different, much better, idea?

https://community.vereign.com/t/input-sought-story-board-idea-for-a-how-does-seal-work-short-explainer-video/404#post_1 Wed, 06 Apr 2022 11:30:06 +0000 community.vereign.com-post-903
Updating the visual SEAL

Unfortunately @giles.vincent came up with the square one because he said that making the text smaller would not dramatically change the size, which is primarily due to the round shape. He wanted to show us a version to demonstrate, but is on vacation this week, so I don’t think we will get much from him this week.

In my view, we can only go with round if we EITHER

  • also use round under email, which is a difficult form factor

OR

  • we decide we can live with two different kinds of seals for different kinds of objects.

Which to me makes this a difficult call.

But yes, I also think the round looks better. :roll_eyes:

https://community.vereign.com/t/updating-the-visual-seal/402#post_6 Mon, 04 Apr 2022 17:23:52 +0000 community.vereign.com-post-900
Updating the visual SEAL

For me the new round design looks better; only the “SEAL OF DIGITAL ORIGINAL” and “SCAN TO VERIFY” text should be way smaller, because on a real PDF it takes up too much space.

https://community.vereign.com/t/updating-the-visual-seal/402#post_5 Mon, 04 Apr 2022 11:17:07 +0000 community.vereign.com-post-898
Updating the visual SEAL

For me it would help to see mockups of the different options, so we can actually see each design in action (applied to a document or email).

Overall I like the round design better for documents. But I think we might ultimately want to aim for one unified design for both emails and documents, so if the round design (for whatever reason) did not work for emails, that would be a downside to consider when choosing it.

Also, I am not sure about the ratio between the necessary size and the QR code data. In other words: I am not sure how big the round QR code must be to contain the necessary number of characters. So for this question too, it would help to see the actual design applied to a document, taking the necessary size into account.

https://community.vereign.com/t/updating-the-visual-seal/402#post_4 Wed, 30 Mar 2022 12:48:41 +0000 community.vereign.com-post-896
Updating the visual SEAL

Here is an alternative version provided by @giles.vincent - which is square and thus more space efficient:

thumbnail_Square-SEAL-Preview thumbnail_Square-SEAL-Teal-Preview

thumbnail_Square-SEAL thumbnail_Square-SEAL-Teal

Thoughts?

https://community.vereign.com/t/updating-the-visual-seal/402#post_3 Mon, 28 Mar 2022 19:35:47 +0000 community.vereign.com-post-895
Updating the visual SEAL

I like the version with less text (e.g. without “Vereign”).

The other one is harder for me to read.

https://community.vereign.com/t/updating-the-visual-seal/402#post_2 Fri, 25 Mar 2022 10:29:30 +0000 community.vereign.com-post-892
Updating the visual SEAL

The SEAL is central to the user experience in email, and soon we’ll have it for documents as well.

While the user experience and visual language of the main web site, the web verification app, and the add-ins have evolved substantially over the past 12 months, the SEAL itself has been largely the same. Given that we needed to implement this afresh for documents, there was now a window of opportunity to take another look.

Together with @boyan.tsolov I’ve embarked on a little journey, where we started from the idea of an actual physical seal in wax, as well as making it a little more abstract. Here are some drafts that Boyan had a friend of his put into graphics:

as well as trying to make things a little more abstract

Given this needs to work on documents, and survive printing out without getting too ugly, we decided to work a bit more in the direction of the abstract version, which ended up with this:

This is the version I then took to @giles.vincent to get his take on it. Which resulted in… :drum:

thumbnail_SEAL-Example-SD

as well as

SEAL-Example-SD-v2

Which I am personally very happy with.

Inputs sought

  • @claus.bressmer and I feel the “Vereign” in front of “Vereign Seal of Digital Original” is a tad too much, which is why I’ve asked @zdravko to go with the second one for now. Very little text, to the point, reads immediately… in my book that is a big plus.

    @giles.vincent however seems to feel it would be better to have more brand visibility here, beyond the checkmark and the border.

    What do you think? Any strong feelings on the subject matter?

  • Now that we have seen this… should we also consider updating the visual seal for emails?

    It could read “SEALED MESSAGE” on top and “CLICK OR SCAN TO VERIFY” on bottom.

    If we feel we need more context / help, we could add a small text underneath, such as “To learn more about SEALed messages, please visit URL”.

Thoughts?


https://community.vereign.com/t/updating-the-visual-seal/402#post_1 Fri, 25 Mar 2022 10:21:45 +0000 community.vereign.com-post-891
Adding decentralised storage feature to Vereign SEAL
luben:

Thank you very much, this is all very helpful and I’ll use it in my node.

Wonderful. :heart_eyes:

Yes.

See start of the thread. Our add-ins should bundle

unless there is a better alternative that is more suitable to our goals – if you look at Brave, they are bundling

into the browser itself, including some default configuration.

That is exactly what we should be doing, including providing a sane default configuration to make IPFS work well for our use case. Because then we know we have a sane local IPFS with sensible defaults.

I see. But I would also guess that IPFS does this by default.

So I would first want to verify whether this is already happening, so we don’t do it twice. Inexpensive or not, duplication of effort seems pointless.

https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_14 Wed, 13 Oct 2021 08:10:29 +0000 community.vereign.com-post-698
Adding decentralised storage feature to Vereign SEAL

Thank you very much, this is all very helpful and I’ll use it in my node.

About the local nodes: are we expecting users to manually run a local node in the browser, or do we want to embed a node in the extension, so that every user will have an IPFS node automatically?

As for the issue with the CID, I agree that hash collisions are not a problem we should discuss.

What I wrote intended to illustrate that generating the CID at any point in time, by anyone who has the content, is a relatively cheap (almost instant) operation.

And the example was that clients should (and I’m sure the IPFS implementations do by default) verify the content they receive by hashing it after it arrives. It’s an open network, and no one can know what a piece of software on a remote computer will send back in response to a /get/cid request. The only way for the client to know whether the content is valid is to hash it and compare the result against the CID it requested in the first place.
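
A rough sketch of that verify-on-receive step. It is simplified on purpose: a real CID is a multihash-encoded (and for large files, chunked) identifier, while here we just compare a raw sha256 digest.

```python
# Simplified sketch of validating fetched content against the
# identifier it was requested by. Real IPFS CIDs are multihash-
# encoded and large files are chunked into a DAG; this version
# compares a plain sha256 hex digest instead.
import hashlib

def content_id(data: bytes) -> str:
    """Derive the identifier from the content itself (cheap, local)."""
    return hashlib.sha256(data).hexdigest()

def fetch_is_valid(requested_id: str, received: bytes) -> bool:
    """Accept bytes only if they hash to the id we asked for --
    the network itself is untrusted."""
    return content_id(received) == requested_id
```

The same cheap operation is what lets a sender compute the identifier before uploading anything.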

https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_13 Tue, 12 Oct 2021 20:44:13 +0000 community.vereign.com-post-696
Adding decentralised storage feature to Vereign SEAL
luben:

I still can’t understand how they do it with the local node and I’ll try to find more info on the internet. Unfortunately I also cannot find the forum post or internet page where I read about the need to expose ipfs node’s 4001 to the internet in order for other nodes to accept content announcements. I’ll post it here if I manage to find it again.

IPFS has a mix of nodes with open ports (these are the ones that can be discovered) and nodes without. The nodes without discoverable ports connect to the nodes with discoverable ports, and they all sync data amongst themselves.

But the data path can be convoluted and complex if you run a local node without an exposed port, given that it will opportunistically connect to nodes with exposed ports based on what it finds – and if someone is looking for data on it, that request needs to reach a node that just happens to be connected to the one you are hosting without an exposed port.

In my experience it can take a while for data to become available on any random node if you run a node without exposed ports. Unless you have stable / persistent peering set up for your local node, that is.

It always shows up eventually but the time cannot be planned.

But if you set up constant peering, e.g. to Cloudflare, things get really fast. Ideally you always maintain constant peering to the nodes you would be using for data retrieval or pinning… as those are then just one hop away from the data you are looking for.

You had your IPFS node set up to connect to the global IPFS network (not a private network, and not network limited) and enabled constant peering / connection to Cloudflare and ipfs.io?

That is useful, but raises the question: why take on the hard task of asynchronous data synchronisation, including things like disconnects and resumes, when IPFS is dedicated to and optimised for exactly that task?

Not to mention: IPFS also provides a local caching layer for this kind of data when connectivity is (temporarily) interrupted, allowing us to process data and show results even while offline or with limited connectivity.

Ultimately it is about using the right tool in the right way. The right way of using IPFS is via distributed nodes for the clients, and larger, port accessible nodes for sharing, distribution and persistence/pinning.

This is also a matter of redundancy and resilience. By using the local IPFS nodes, mail sending and sealing will always work as expected, even if the Vereign IPFS service is temporarily having issues, and no data would be lost. And once we are moving to Mailtree, we should be able to build the entire system to no longer have single points of failure.

Uhm… isn’t that the whole point of Content Addressable Storage?

The only way the data could realistically NOT be the data we expect would be if there was a hash collision. Which seems pretty unlikely.

And if sha256 were to be compromised, IPFS could update seamlessly:

So if it has the correct address (= the correct hash) then it is with near certainty the right file.

Yes.

Here is another one, served straight from my Brave browser’s local IPFS node:

https://cloudflare-ipfs.com/ipfs/QmS4vvhsVvkubbyKq8p5YiPwRHFciby5hPeMgiZeuTN9Sg?filename=facebook-down.png

See how it actually got cached and even got filled into this thread?

Here is my peering section right now:

    "Peering": {
      "Peers": [
        {
          "Addrs": [
            "/ip6/2606:4700:60::6/tcp/4009",
            "/ip4/172.65.0.13/tcp/4009"
          ],
          "ID": "QmcfgsJsMtx6qJb74akCw1M24X1zFwgGo11h1cuhwQjtJP"
        },
        {
          "Addrs": ["/dns/cluster0.fsn.dwebops.pub"],
          "ID": "QmUEMvxS2e7iDrereVYc5SWPauXPyNwxcy9BXZrC1QTcHE"
        },
        {
          "Addrs": ["/dns/cluster1.fsn.dwebops.pub"],
          "ID": "QmNSYxZAiJHeLdkBg38roksAR9So7Y5eojks1yjEcUtZ7i"
        },
        {
          "Addrs": ["/dns/cluster2.fsn.dwebops.pub"],
          "ID": "QmUd6zHcbkbcs7SMxwLs48qZVX3vpcM8errYS7xEczwRMA"
        },
        {
          "Addrs": ["/dns/cluster3.fsn.dwebops.pub"],
          "ID": "QmbVWZQhCGrS7DhgLqWbgvdmKN7JueKCREVanfnVpgyq8x"
        },
        {
          "Addrs": ["/dns/cluster4.fsn.dwebops.pub"],
          "ID": "QmdnXwLrC8p1ueiq2Qya8joNvk3TVVDAut7PrikmZwubtR"
        },
        {
          "Addrs": ["/dns4/nft-storage-am6.nft.dwebops.net/tcp/18402"],
          "ID": "12D3KooWCRscMgHgEo3ojm8ovzheydpvTEqsDtq7Vby38cMHrYjt"
        },
        {
          "Addrs": ["/dns4/nft-storage-dc13.nft.dwebops.net/tcp/18402"],
          "ID": "12D3KooWQtpvNvUYFzAo1cRYkydgk15JrMSHp6B6oujqgYSnvsVm"
        },
        {
          "Addrs": ["/dns4/nft-storage-sv15.nft.dwebops.net/tcp/18402"],
          "ID": "12D3KooWQcgCwNCTYkyLXXQSZuL5ry1TzpM8PRe9dKddfsk1BxXZ"
        },
        {
          "Addrs": ["/ip4/104.210.43.77"],
          "ID": "QmR69wtWUMm1TWnmuD4JqC1TWLZcc8iR2KrTenfZZbiztd"
        }
      ]
    },

and also I have set

    "Reprovider": {
      "Interval": "5m",
      "Strategy": "all"
    },

so it re-announces the availability of data every 5 minutes, and not just every 12 hours.

See

for more information.

https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_12 Tue, 12 Oct 2021 17:03:45 +0000 community.vereign.com-post-695
Adding decentralised storage feature to Vereign SEAL

Ok, sounds good, let’s try it and see if it can work!

I still can’t understand how they do it with the local node and I’ll try to find more info on the internet. Unfortunately I also cannot find the forum post or internet page where I read about the need to expose ipfs node’s 4001 to the internet in order for other nodes to accept content announcements. I’ll post it here if I manage to find it again.

My assumption that the node should act as a server with port 4001 open comes from the post I read, plus my experiments with my local node. I uploaded content, announced that content to the network, and later I was not able to fetch the content from anywhere on the network except my own node - neither Cloudflare nor ipfs.io returned the data after I waited and retried many times.

I think we don’t need to wait for uploading the content to know what the CID is. There are JS (and Go too) libraries which can calculate the CID before any uploading is initiated just by hashing the content. For a 10-20 megabytes attachment that should be some hundred milliseconds operation on the client side, so I assume it won’t be a problem.

I think we can make the client calculate the CID, put it in the SEAL and later upload the content. I think this is also how nodes validate that they have received the correct content, because the network is untrusted (or let’s say trustless). When we request content by CID from other nodes, we don’t know if the bytes which they return are the correct content that we need, so the client calculates the CID from the received bytes to validate that they match the CID that it has requested. So this operation should be relatively cheap to perform and independent of the uploading itself.

Are you sure that you have not uploaded a file which happens to also be uploaded by some other node in the network, and that’s the reason that you were able to fetch it? Is it a truly unique PDF file which only you have?

https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_11 Tue, 12 Oct 2021 13:13:01 +0000 community.vereign.com-post-694
Adding decentralised storage feature to Vereign SEAL

Hi @luben - no worries.

Discussing these things to make sure we’re all on the same page is the reason for this forum.

Excellent.

See the links I shared yesterday.

From what I understand, those pinning services work via a RESTful API over which you submit the CID/hash of what you want them to pin. They then request that data and pin it for you, charging the required storage to your account via the authentication token you need to submit with such a request.

The node needs to have the data to pin it, yes. But it can easily get the data by requesting it, followed by a pin operation on that data. In fact, you can find just that information in the documentation shared yesterday:

https://ipfs.github.io/pinning-services-api-spec/#section/The-pin-lifecycle/Checking-status-of-in-progress-pinning
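
For illustration, the remote-pin request described in that spec boils down to a POST to /pins with a bearer token. A minimal sketch that only constructs the request (the service URL and token below are placeholders; nothing is sent):

```python
# Sketch of a remote-pin request per the IPFS Pinning Service API
# spec linked above: POST {service}/pins with a JSON body naming the
# CID, authenticated by a bearer token. The URL and token are
# placeholders; the request is built but not sent.
import json
import urllib.request

def build_pin_request(service_url: str, token: str, cid: str) -> urllib.request.Request:
    body = json.dumps({"cid": cid}).encode("utf-8")
    return urllib.request.Request(
        url=f"{service_url}/pins",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

The service then fetches and pins the data itself, which is exactly the “pin by CID, not by upload” behaviour described above.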

Yes, if we ourselves wanted to also use a 3rd party pinning service, we would have to pay for that. In that case we’d have to price this in. But of course our USERS might choose to use additional / 3rd party pinning services for which they would be happy to pay themselves and it would do nothing to impact the functioning of our system.

That is evidently false, since this would otherwise not have worked:

My browser is behind two firewalls, has no open ports, nor a public IP address, yet it works perfectly serving data up to the IPFS network.

That is a StackOverflow post by someone from 2016 about HTTP in general and does not seem related to IPFS at all. I fail to see how it is relevant to what we’re discussing.

IPFS uses peering with publicly reachable nodes as its distribution method and keeps those channels open for a while, refreshing them occasionally. But when told to do so, it can also maintain permanent connections to other nodes, e.g. Cloudflare, and can receive requests over those connections.

In fact, the whole premise/idea of IPFS is to have nodes distributed across all kinds of devices and browsers in order to allow peering and distribution via IPFS to any application, anywhere. That is why you have apps that create IPFS nodes for mobile phones - which also don’t have public IP addresses, typically.

Using IPFS properly means storing/retrieving data LOCALLY.

The benefits are in speed of operation, independence from current bandwidth situation, implicit caching of data, reduction of transmitting the same things repeatedly and so on and so forth.

Please see again the original post, specifically the Considerations for integration.

We should ALWAYS prefer local first, in an order of

  • same network (if configured)
  • same device (if configured)
  • same application (default)

and then fall back on the service you developed as the last resort only.

The only exception to this is when the data is already in the cloud because we do not want to download in order to then store into IPFS, which will then upload it again.

It does not require configuration, except for potential speed gains.

There are always plenty of nodes happy to be connected to you. My browser has hundreds of nodes connected worldwide right now. As to the likelihood of Cloudflare as a business no longer wanting to participate in the IPFS community, I guess that is possible. But it also seems rather unlikely, especially given their moves towards offering storage now:

This is a near perfect permanent storage layer for IPFS, and Cloudflare’s business is as a CDN.

So it would seem odd that they would suddenly stop distributing IPFS when this is where a lot of the innovation is happening and they themselves have been pretty early and involved in this, from the looks of it.

But even if they did: It would not invalidate any of the things discussed here.

Once data is in IPFS, we can pin it anywhere, our clients will always be able to retrieve it from IPFS over their local nodes, and the Vereign IPFS service will keep it pinned for as long as required. And even if we were to use Cloudflare for our own pinning service, then we could switch to another one, or build one ourselves should they ever become either hostile or no longer willing to support IPFS.

That’s the beauty of a heterogeneous, growing ecosystem of providers.

I would start this all by:

  • having a Vereign IPFS node and pinning service
    • this node should be configured to maintain permanent connections to Cloudflare and ipfs.io at least, and perhaps some others as well. This is pure configuration, so easily updated
  • having all clients incorporate local IPFS nodes configured to permanently peer with
    • the Vereign IPFS node
    • the Cloudflare IPFS service

and then have local clients request data in parallel from

  • local node
  • Cloudflare

and only fall back on the Vereign IPFS node if neither responds in a reasonable time. That way we protect the bandwidth in the data centre, and use Cloudflare as much as possible.
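
That retrieval order can be sketched roughly as follows. The fetcher coroutines are simulated stand-ins (a real client would hit the local node’s API and the gateways); only the race-then-fallback shape is the point:

```python
# Sketch of "race the fast sources, fall back on the Vereign node".
# The fetchers below are simulated placeholders, not real IPFS calls.
import asyncio

async def race_with_fallback(primaries, fallback, timeout=2.0):
    """Return the first result from any primary fetcher; if none
    answers within the timeout, use the fallback (last resort)."""
    tasks = [asyncio.create_task(fetch()) for fetch in primaries]
    try:
        done, _pending = await asyncio.wait(
            tasks, timeout=timeout, return_when=asyncio.FIRST_COMPLETED
        )
        if done:
            return done.pop().result()
    finally:
        for task in tasks:          # cancel whatever is still running
            task.cancel()
    return await fallback()

async def local_node():             # simulated: answers quickly
    await asyncio.sleep(0.01)
    return "data-from-local"

async def cloudflare_gateway():     # simulated: a bit slower
    await asyncio.sleep(0.05)
    return "data-from-cloudflare"

async def vereign_node():           # simulated last resort
    return "data-from-vereign"

result = asyncio.run(
    race_with_fallback([local_node, cloudflare_gateway], vereign_node)
)
# here the local node answers first, so result == "data-from-local"
```

This way the Vereign node only spends bandwidth when neither the local node nor Cloudflare delivers in time.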

You are right that for this first step of feature development, local nodes are less useful for data retrieval, at least in the first iteration.

But please keep in mind that this is only the first step, and our chance to try out a core component of what we want to do with Mailtree, where attachments are not the only things that will go into IPFS: the data required to verify the seals and the read receipts will also be in IPFS.

So this is our chance to experiment with local IPFS node integration and usage so we don’t need to take that step once we move to Mailtree and will have a lot more moving parts.

As for the web verification app: Since it is not a persistent application used repeatedly, adding an IPFS node does not seem to add any benefit right now, I agree. Here I would probably default to Cloudflare, and fall back on the Vereign IPFS service.

You need the IPFS CID in order to generate the SEAL.

So there is no way to generate the SEAL first and wait for attachments to be uploaded asynchronously afterwards / while you are sending the mail. You are always blocked on the upload to IPFS.

Which is why writing speed for attachments is crucial for the user experience. Local write will always be faster than network. And IPFS has the special property that we are getting the correct, permanent CID immediately so we can generate the SEAL right away and send in a matter of seconds – regardless of whether the attachments have already been uploaded / synchronised.

Does this all make more sense now?

https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_10 Tue, 12 Oct 2021 11:15:27 +0000 community.vereign.com-post-693
Adding decentralised storage feature to Vereign SEAL

Hi Georg!

I don’t want to sound contrarian or negative and I believe that the discussions and questions that you’re raising are immensely valuable for all of us to understand how to implement and think about these features and technologies.

I just want to share what’s bothering me with the IPFS implementation and my current understanding of the IPFS network and its capabilities. The writings that will follow may be partially (or completely) wrong, but that’s why we discuss and learn things :slight_smile:

I’ll try to write answers by quoting different parts of your post.

Our current API has pinning enabled (I forgot to mention it yesterday), and when content is uploaded to the IPFS service, it is also pinned to our IPFS node (currently pinning is without expiration). As of now the API doesn’t have an endpoint to UNPIN content, but it will be very easy to add whenever we need it.

As far as I understand, you can only pin content to nodes where you are effectively uploading the content. So pinning goes hand-in-hand with uploading data; we cannot just pin data to a node without uploading the data to that node. So in order for anyone to pin content to a 3rd-party service, they’ll have to somehow authenticate and be authorized to upload, which is a paid-for service. After the client/user/business has entered into an agreement with a 3rd party to upload and store content there, content upload happens in the same way as we are now uploading content to our IPFS nodes.

We can re-upload and pin content from our IPFS nodes to a 3rd-party node for redundancy, but we’ll have to pay their storage price. As far as I can see, this can only be achieved service-to-service. I don’t see how browser nodes can authenticate against a 3rd party securely and upload data there directly, because if the authentication keys are in the browser, they are effectively not confidential. So pinning and uploading from a browser extension to a 3rd-party node can only happen by proxying the data through some backend service, which means we can upload the data both to our IPFS nodes and to 3rd-party nodes (effectively, the browser only ever uploads data to our IPFS service).

It would be good if we could still make the uploading of content to our IPFS nodes asynchronous and send the email quickly, without using a local IPFS node. For the moment I can’t see the benefit of having a local IPFS node in the extensions or the browser, because that node is effectively not reachable from the internet. From what I understand, a local node cannot be a peer to the other nodes of the IPFS network, because they cannot initiate a request back to it when they want to fetch content: javascript - Listen to http request on webpage - Stack Overflow

I mentioned yesterday that if an IPFS node wants to advertise to the network that it has some content, it has to be reachable from the internet. Nodes that receive the announcement request will try to open a connection back to the advertising node, and if they can’t connect, they will not create a routing record for that content (the announcement will be ignored). I assume this is what will happen with the IPFS browser nodes: the content that they have will be useless and unreachable. I may be wrong about that, but this is what I’ve found so far.

This seems to work on a good-will basis, because the fact that you configured your node to open long-lived TCP connections to other nodes and public IPFS gateways doesn’t mean that they will honor your requests. They will frequently recycle/refuse/drop connections in order to operate their service more efficiently, and we cannot guarantee a stable connection with these providers. We can configure our nodes to try to make these connections, but it’s up to the other party to accept and support a long-lived connection. These configuration options may still be useful for more efficient content discovery and routing, though.

Here I’m a little bit lost on the meaning of “locally”. I imagine the following scenarios:

  1. A recipient of the email has direct access to the attachments in the email, so he doesn’t need to download them - here IPFS is not involved at all.

  2. The web verification app must download the attachments from IPFS.

If we assume that the user’s browser that opens the verification app has a built-in local IPFS node, I can’t see how the attachments would ever end up in this local node. It seems to me that looking for the data locally will never yield results. The web app will always have to fetch the data from a public IPFS service or our own IPFS service.

Next, the web app could start concurrent requests to public IPFS services and our IPFS service, but this also doesn’t seem to make sense, because our IPFS service will respond immediately with the content, while public gateways will have to find the route to our node, fetch the data from there and restream it. I suppose it will always be orders of magnitude faster to get the data directly from our service, because our service represents the only node(s) on IPFS which has the content. Even in cases where the public gateway has a direct record and knows that it must fetch the data from us instantly, this will still be an operation that is placed in a queue and effectively restreams the content from us. So I think streaming the data directly to the client will always be faster than streaming it to another node which then streams it to the client.

To wrap up: The IPFS network as I currently see it doesn’t have any incentives for anyone to store anyone else’s content, and in general nobody stores anybody else’s content. It so happens that if multiple parties/businesses/people store the same information - like for example a huge public dataset, or the internet archive, or some other valuable public information - then this information is redundantly dispersed and can be retrieved and exchanged between clients and servers more efficiently. This, as I understand it, is the purpose and strength of IPFS: multiple parties hosting the same content without any coordination makes access to the content more efficient, and at the same time each of them has a copy, which increases the redundancy.

However, for the specific information of a company, without any structure because everything is encrypted, no one in the network will store even a single byte of our content unless we pay them to - and even when we pay them to, our clients will still have to go through our backend services for authentication.

We can use a local in-browser node to experiment with lazy syncing from a client to our nodes, but besides that use case, for the moment I cannot see what else we can do with a browser node. And if this turns out to be the case, it will be best if we can async upload the data without using a local IPFS node.

Please excuse me for the cold shower thoughts on this topic, but it’s how I currently see the IPFS stuff. If I’m wrong, I’d happily change and evolve my understanding :zipper_mouth_face:

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_9 Tue, 12 Oct 2021 10:23:28 +0000 community.vereign.com-post-692
Adding decentralised storage feature to Vereign SEAL
georg.greve:

Peering for Performance

If we want to host our own IPFS service, or use Cloudflare (or similar) for this function, we want data to be available as quickly as possible.

By the way: I tried this out with the IPFS node in my Brave browser, peering it against Cloudflare as provided in the example.

Uploaded a PDF, got its share link (https://ipfs.io/ipfs/XXX) and then accessed it via https://cloudflare-ipfs.com/ipfs/XXX. The PDF was new, freshly uploaded, but my other browser (Google Chrome) pulled its data within a second or two. It wasn’t noticeably slower than normal web pages.

Promising.

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_8 Tue, 12 Oct 2021 08:47:51 +0000 community.vereign.com-post-691
Adding decentralised storage feature to Vereign SEAL Hi @luben - thank you for this, this is great progress!

Some questions and remarks from my side:

PIN as part of API

We all know that performance is going to be critical, which is why the initial outline assumed we would write data locally on the device sealing the message. Right now our add-ins do not have IPFS yet, but the way this SHOULD work is to write locally, and then authenticate toward our service to request to PIN (= synchronise, provide and make permanent) this information.

So I would expect our API to also have operations for

  • PIN
  • UNPIN

where PIN should likely have a time component to it, e.g. “PIN for 10 years”, which should trigger the fetching of that information and subsequent pinning. IPFS already has ready-made components for this; in fact, there are commercial and free pinning services available right now. See

and

for more information.

Since pinning is a resource-costly operation, all these pinning services typically require access keys, which they then use to map requests to accounts, which have built-in accounting based on volume.

Thought on PINNING

We might even consider using more than one pinning service for redundancy or allow people to select their own pinning service preferences as additional features.

Performance of LOCAL + PIN vs UPLOAD and PIN

LOCAL + PIN makes sense where files are on a local device that has “imperfect” bandwidth because otherwise sending would involve waiting for all the uploads to be finished. In these situations, LOCAL + PIN allows us to send right away, and “lazy sync” after the mail has been sent.

But where data is already in the cloud because it has been uploaded during drafting stage, or because it was attached as a link to cloud data, pulling it down to then write it locally only to then synchronise it back to IPFS again makes no sense.

So where such data is already in the cloud, we should use the API to transfer “cloud to cloud” in order to not depend on poor local bandwidth.

API spec might need “PIN time” argument

Like for the “PIN” operation, the “UPLOAD” operation likely also needs a “and pin” or a “and pin for time period X” argument.

Peering for Performance

If we want to host our own IPFS service, or use Cloudflare (or similar) for this function, we want data to be available as quickly as possible.

IPFS has a notion of peering, which basically means a constant connection is kept between the local node and another node that we know has data we are interested in. This is a configuration item, see

All our clients should keep permanent peering with all the IPFS nodes we know to hold SEAL data, I believe - be it our own, and/or one we operate via Cloudflare.
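For reference, peering is configured via the `Peering.Peers` section of the IPFS node config, which lists peers to keep a permanent connection to. The entry below uses one of the Cloudflare peers published in the IPFS peering documentation; peer IDs can change, so verify the current value before relying on it:

```json
{
  "Peering": {
    "Peers": [
      {
        "ID": "QmcFf2FH3CEgTNHeMRGhN7HNHU1EXAxoEk6EFuSyXCsvRE",
        "Addrs": ["/dnsaddr/node-1.ingress.cloudflare-ipfs.com"]
      }
    ]
  }
}
```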

Performance: Look local + Download

When accessing data, we should always have parallel requests to get the data we are looking for locally, as well as downloading it from the “IPFS cloud service.”

If local comes back right away, we can already use that data and can abort the download operation.

If local does not have it, we need to wait for the download to finish.

But the request to local will likely also trigger a sync to local, so we can re-use the data when needing it again, which is not uncommon: if you’ve looked at this mail today, chances are you will have another look in the next 7 days or so. By issuing parallel requests with a “first winner” approach we can use IPFS as a dynamic cache for data we are likely to need again.


So far from my side. I hope all of this makes sense.

If you have questions, you know where to find me. :smiley:

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_7 Mon, 11 Oct 2021 16:42:30 +0000 community.vereign.com-post-690
Adding decentralised storage feature to Vereign SEAL Hello, I’m writing a summary of what we’ve done so far for the IPFS storage feature.

Design

The following picture presents a high-level overview of the IPFS architecture.

IPFS Service

It is a Go backend service that processes all client requests. It exposes a thin API layer above the IPFS API, and clients communicate directly with this service only when uploading content. The API of the IPFS cluster is not visible from the internet and can only be accessed by our internal backend services.

This front-facing IPFS service enables us to:

  • authenticate requests
  • implement business logic, like for example automatically announcing the content that has been uploaded
  • provide more sensible log entries and responses related to our business logic
  • validate and rate limit requests
  • attach metrics and tracing to IPFS operations
  • decouple and hide all implementation details of the IPFS API and its future changes
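As a rough illustration of that authenticate-then-forward shape, here is a sketch in Go. The token check and the forwarding hook are placeholders standing in for the real Vereign authentication and the internal cluster client, not our actual implementation:

```go
package main

import (
	"fmt"
	"net/http"
)

// authorize is a placeholder for real Vereign token validation.
func authorize(authHeader, validToken string) bool {
	return authHeader == "Bearer "+validToken
}

// uploadHandler sketches the front-facing IPFS service: authenticate the
// request, then forward the body to the internal IPFS cluster API (which is
// never exposed to the internet). The forward hook returns the resulting CID.
func uploadHandler(validToken string, forward func(*http.Request) (string, error)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !authorize(r.Header.Get("Authorization"), validToken) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		cid, err := forward(r) // upload to the internal cluster
		if err != nil {
			http.Error(w, "upload failed", http.StatusBadGateway)
			return
		}
		fmt.Fprint(w, cid) // the client embeds this CID in the SEAL
	}
}

func main() {
	fmt.Println(authorize("Bearer secret", "secret"))
}
```

Rate limiting, metrics and tracing from the bullet list would wrap this handler as middleware.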

What we have so far

The work can be grouped by functionalities on the frontend (clients) and backend (ipfs service + ipfs cluster).

I can describe the progress of the backend functionalities and @alexey.lunin might describe what is happening on the frontend.

On the backend we have a local dev environment with docker-compose which contains:

  • a container running IPFS node
  • a container running IPFS service with an exposed HTTP API

The implemented functionalities so far are:

  • request authentication
  • content upload
  • content download
  • announce content to the IPFS network which should help with query performance

The last point is not tested in real conditions, because announcements only work when our IPFS node(s) are at least partly visible on the internet, which cannot be done with a local dev environment. @zdravko put a lot of effort into deploying the backend parts on k8s, but we still don’t know how to expose port 4001 of the IPFS nodes so that they can participate in the content routing and announcements of the global IPFS network. He still has some ideas that he will try, but this remains a WIP for now.

What remains to be done

  • Secure and properly configured IPFS cluster for dev/staging/prod environments
  • Configuration of storage mechanism for the IPFS cluster (e.g. backblaze or mounted storage in the Cloud)
  • Deployments for the IPFS service on dev/staging/prod environment
  • Rate limiting, metrics and tracing in case we decide to go in production
  • Frontend development for encrypting and storing attachments in IPFS
  • Frontend development for fetching and decrypting attachments from IPFS

Unfortunately the last point is very uncertain, and from now on it’s probably best to focus on it, because if it turns out that announcements and content discovery don’t work well, I guess it may change our plans to use IPFS as a whole. I mean, if a user has to wait 30-60-120+ seconds to see a web page, then this feature probably won’t make much sense. To test this we need to deploy an IPFS node in the Cloud, open its port 4001 to the internet, and try various node configuration options + announcements.

To wrap up: we have an API to upload, download and announce content. We need to further configure a cluster and do the frontend part.

Please comment if you have questions and suggestions.

@georg.greve @kalincanov

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_6 Thu, 07 Oct 2021 11:28:41 +0000 community.vereign.com-post-680
Adding decentralised storage feature to Vereign SEAL
luben:
  • We’ll need to have scheduled backups of the data (as is with conventional storage).

Using Backblaze should actually eliminate or at least dramatically de-prioritise that requirement for the moment…

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_5 Fri, 24 Sep 2021 09:46:22 +0000 community.vereign.com-post-670
Adding decentralised storage feature to Vereign SEAL Thank you for the summary, @luben !

That is why I believe all add-ins should include IPFS nodes by default, see “Considerations for integration” in the original post.

That way writing is instantaneous, because it is local, and the CID is available right away for sending.

The data so written to IPFS can then be synchronised to the network asynchronously as the mail itself is getting wrapped up and sent. FWIW, we can upload to IPFS the moment something gets attached, so likely several seconds, perhaps even minutes, before something is sent.

In any case, sending a mail should always trigger a synchronisation with pinning to our own IPFS cluster, which can proceed asynchronously in the background, as described.

Note: There will be a short window of potential data loss, basically a race condition of “user sends mail and then immediately uninstalls add-in including the local IPFS node before the data could be synchronised” - but we may be able to mitigate this condition in a couple of ways, plus it does not seem like a very likely path of action for a normal user.

Be that as it may: There is a strong incentive for us to always keep things synchronised, and thus trigger synchronisation as quickly as possible whenever sending mail – if only to make sure the gateways along the path and the recipient have the required data available to process the sealed message.

This would seem to spell the following technical steps:

  • Add IPFS nodes to our add-ins, which translates to adding js-ipfs (GitHub: ipfs/js-ipfs, an IPFS implementation in JavaScript) to our add-ins
  • Create IPFS cluster for Vereign that we can synchronise data to
  • Build API that allows us to
    • Trigger SYNC & PIN (with time parameter)
    • Write data to our IPFS cluster & PIN (with time parameter)

That API must require authentication, and as written above, I would propose to re-use or re-build

https://nft.storage/api-docs/

for this API, extending it for the “SYNC & PIN” operation.

FWIW, we also want to add payment to this operation, as part of our work on https://community.vereign.com/t/technical-questions-in-preparation-of-the-token-sale/314/2 but that can likely happen in the second step.

In the first step I think it is crucial we build this out as an attachment/body feature first on top of our existing product, allowing us to get practical experience with all the implications and pitfalls. All this work will then be useful for our work around the token sale, as well as switching to full Mailtree mode.


Besides leveraging things like Filecoin in ways similar to

and others, we may also continue to use Backblaze for this through a combination of

and

which would be the smallest possible change, and would allow us to benefit from the extremely advantageous Backblaze storage costs.

My preference would be to go this path at first, as it would allow us to provide this service for the time being similar to what Protocol Labs does with

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_4 Fri, 24 Sep 2021 09:45:10 +0000 community.vereign.com-post-669
Adding decentralised storage feature to Vereign SEAL Hello! This is a follow-up summary of the technical meeting we had yesterday with @georg.greve, @zdravko, @alexey.lunin and me.

Please keep in mind that my thoughts and opinions are constrained by my understanding of the IPFS network and the SEAL project itself, and there are still many things which I don’t understand well. Please correct me and expand on this however you see fit.

IPFS

The architecture of IPFS will lead and affect our own architecture for storing and fetching data. We’ve considered the following points.

1. Writing data to IPFS

Writing data to public IPFS gateways is NOT reliable and NOT recommended.

This is understandable, because if a gateway allows everyone to upload content without any authentication, the gateway itself will be overloaded with issues - active and passive monitoring for abusive content, storage and bandwidth costs, DoS attacks, etc.
All public gateways are used for fetching IPFS content, not for uploading new content. Even if a gateway seems to allow content uploading (as of now), there are no guarantees for the reliability or availability of the service. It may stop accepting uploads whenever the owner or rate-limiting algorithms decide to.

This means that we must have our private internal IPFS cluster of nodes for writing data. These nodes won’t be exposed externally and will be accessible from our backend services only.

Extensions (clients) will send the encrypted attachments to an HTTP backend service, which will handle the upload to IPFS. The service will require authentication with a Vereign token, so only logged-in users will be allowed to upload data.

Here I see the following challenge: Clients must include the identifiers of the uploaded attachments in the SEAL (hash or CID of the IPFS content), but we don’t want the email client (and the user) to wait for the upload to finish before the email can be sent. I’ll be reading the docs to see whether we can calculate the CID before the upload takes place, so the backend service can respond with the CID immediately - or, even better, whether the clients themselves can calculate the CID before sending the data to the backend, which would give the best UX. This issue stems from the fact that IPFS content is not addressable by a filename that we can generate or specify (as is the case with traditional storage); instead, the content addresses itself.
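One caveat I found so far: the CID a node returns depends on chunking and DAG layout, so a client-side computation only matches when the node adds the content with matching settings - for content stored as a single raw block (e.g. `ipfs add --cid-version 1 --raw-leaves` on files below the chunk size), the CID is derived purely from the bytes. As an illustration, here is that derivation built by hand from the multiformats spec (version byte, `raw` codec, sha2-256 multihash, base32 multibase) rather than via a library:

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
)

// Lowercase base32 alphabet without padding, as used by the "b" multibase.
var b32 = base32.NewEncoding("abcdefghijklmnopqrstuvwxyz234567").WithPadding(base32.NoPadding)

// rawCIDv1 builds a CIDv1 for a single raw block by hand:
// <version=0x01><codec=0x55 raw><multihash: 0x12 sha2-256, 0x20 digest length, digest>,
// then multibase-encodes the bytes with the "b" (base32) prefix.
// It only matches what a node returns for content added as one raw leaf.
func rawCIDv1(data []byte) string {
	digest := sha256.Sum256(data)
	cid := append([]byte{0x01, 0x55, 0x12, 0x20}, digest[:]...)
	return "b" + b32.EncodeToString(cid)
}

func main() {
	fmt.Println(rawCIDv1([]byte("hello")))
}
```

If the backend adds attachments with different chunking settings, the precomputed value would not match, which is exactly what needs to be tested.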

2. Getting data from IPFS

We discussed different options and tradeoffs and how we can make the experience more optimal.

One option is for clients to always fetch data from our own IPFS cluster. This should have good performance, but is missing the point of decentralized network usage.

@georg.greve suggested some other options and I’ll try to summarize them.

  • We can somehow notify public IPFS gateways like Cloudflare which content is available at our IPFS cluster, so that when they receive requests from clients, they need not search the entire IPFS network for the content, but immediately fetch it from us. This is a great option which we’ll have to research.
  • Clients can have built-in IPFS javascript nodes which can also be preconfigured to look for our content in our IPFS cluster directly. This will probably be a later step as it will require more development on the frontend than we’ll need initially, but it also sounds good for faster access.
  • Clients can upload the files directly to their IPFS javascript node - this sounds good, but I’m not sure how reliable it is. What happens if the user closes his tab/browser after sending an email, while the uploaded content is not yet synced with the IPFS network and our IPFS cluster? Will the IPFS module still work under the hood with the tab closed? And with the browser itself closed, it almost certainly won’t work at all?
  • Another good option he suggested is a feature of the IPFS network which can be used to trigger a caching event on a public gateway for a particular content (CID). This will be very useful as most emails are opened by recipients relatively soon after they’ve been sent (e.g. 1-5 days), so this caching may speed up the fetching of data significantly.

Operational Challenges

  • We’ll still need to store all of the data ourselves forever (or 10 years), because we cannot force the network to store it for us. This means we’ll either need to pay for conventional storage and/or use 3rd party services that will store the data for us under an agreement, so that we can be sure that nothing disappears.

  • We’ll need to have scheduled backups of the data (as is with conventional storage).

  • We’ll need to administrate and operate a (secure) IPFS cluster with its own storage (sysadmin, devops work).

Please comment if I missed or incorrectly described something. The input from the frontend team will also be very helpful, as a lot of work will happen there, especially if we want to implement IPFS nodes in the client extensions.

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_3 Fri, 24 Sep 2021 09:14:56 +0000 community.vereign.com-post-668
Adding decentralised storage feature to Vereign SEAL Highly relevant and interesting post from other thread:

https://community.vereign.com/t/token-idea-personal-professional-email-token-pet/317/6

We should consider building our own IPFS storage API based on

so we can re-use all of

including things like

]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_2 Fri, 24 Sep 2021 07:45:56 +0000 community.vereign.com-post-666
Adding decentralised storage feature to Vereign SEAL As also raised in our internal primers on W3C Decentralised Identifiers and the DIF Sidetree protocol, one goal for Vereign SEAL will be to move toward a scalable approach of self-hosted, corporate and service nodes as part of switching to what we internally have started calling the “Mailtree protocol”, as its design and function are very much influenced by the Sidetree protocol itself.

One of the core components of this kind of approach is the Interplanetary File System (IPFS) to store data in an immutable, decentralised and fault tolerant way. Switching to Mailtree will require all our clients – most importantly the Add-Ins – to connect to IPFS.

For the user experience, IPFS will play a major role especially in terms of speed and convenience.

So we need to get the integration of IPFS right and should dedicate a whole product cycle to this topic, to make sure we understand all the implications, can measure the different performance impacts, and can make adjustments or develop best practices before the functioning of the entire system depends on this component.

Feature Idea Outline

To the layperson, Vereign SEAL – a.k.a. “verifiable credentials for email”, “digital provenance for email and attachments” – is effectively a better version of registered mail. Digital, decentralised, peer-to-peer, more convenient, cheaper, more efficient and with far higher value in terms of securing evidence.

That is why our marketing will highlight “Registered Mail 2.0”, “Digital Registered Mail” and “Decentralised Registered E-Mail” as themes, in order to help wider audiences understand what Vereign SEAL provides. Why do people send registered mail? Most often because they want proof that they provided, sent, or did something.

Traditional registered mail effectively only proves that someone sent an envelope. Vereign SEAL can prove WHAT was sent, including attachments. We can prove this by virtue of hashes which are part of the verifiable credential. But this approach requires that users provide the mail or file itself when trying to prove exactly what was sent. Verification can either be done in the add-in, or requires manual generation and comparison of hashes.

That is not very convenient and may regularly prove too hard to follow for the legal professionals who may be involved in judging whether proof has been provided.

Now imagine that the EMAIL ITSELF, as well as all its ATTACHMENTS were stored encrypted in IPFS.

As a result, the web verification app can display the email message that was sent, and provide the attachments for direct download. Because of the properties of IPFS and because of the way in which the verifiable credential itself is formed and secured against the blockchain, both the mail and its attachments would be GUARANTEED to be EXACTLY as they were sent.

In other words, someone trying to prove this is the contract they signed could just share the verifiable credential with the court and tell them: “Here is what I agreed to. Feel free to download the original contract directly from the receipt.” and it is GUARANTEED to be correct and identical, and extremely easy to use.

Because IPFS is content addressable storage that only distributes files on demand, we can do this in a way that is compliant, is not easily data mined, and will work in a globally decentralised fashion.

And not only would this be a feature that would add a lot of value to Vereign SEAL immediately, it would also allow us to build practical experience with IPFS, including its performance and how we can ensure that the overall user experience is good.

Considerations for integration

Because speed is of utmost importance, adding IPFS means we should add IPFS locally whenever possible. Doing so makes storing the data at sending time a LOCAL operation, allowing the clients to proceed with sending more quickly.

Note: For cloud based services, the local device may be further away. So there it might be better to write to an IPFS instance run by Vereign in the corresponding cloud infrastructure - making it as local as possible. So each client will need to take its data flow patterns into account.

Some clients are therefore likely to need to support more than one approach, e.g. for Outlook on the desktop vs Microsoft 365. They should therefore have an order of preference and priority to use (highest priority / preference first) for SENDING:

  • Local IPFS node in the same cloud, if cloud based – OR –
  • Local IPFS node configured by administrator (e.g. for companies self-hosting)
  • Browser based IPFS node, where possible and the browser offers it, e.g. IPFS Support in Brave | Brave
  • Add-In IPFS Node via JS-IPFS
  • Last fallback: Fallback IPFS Node operated by Vereign itself

For READING/VERIFYING we can start with the same list, but this is the case that is more likely to be slow, and we may need to play with this and tweak things to work as intended. So clients MAY in fact find themselves with a different approach / list for VERIFYING.

In any case, ALL clients – including the web verification app – should include JS-IPFS by default.

Other Changes

Other places we need to introduce changes for this feature

Configuration

Storing the email and/or attachments into IPFS should be optional.

So we may need configuration of default behaviour, or a convenient way to toggle behaviour.

We will likely also need to allow configuration of preferred IPFS node to use, with sane default.

Sending

Sending with storage into IPFS means we need to

  • generate symmetric encryption key (for AES-GCM, most likely)
  • encrypt message / attachments with key
  • store message / attachments into IPFS
  • store encryption key into verifiable credential / SEAL
  • store URIs of message body and attachments into verifiable credential / SEAL along with the file hashes we currently store
  • send SEAL normally
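The encrypt/decrypt steps above can be sketched with standard-library AES-GCM. This is an illustration of the scheme only (the actual client code lives in the JavaScript add-ins), with hypothetical function names; note that GCM needs the nonce alongside the key, so the SEAL would have to carry both:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// sealAttachment encrypts one attachment with a fresh random AES-256-GCM key.
// Key and nonce would travel inside the verifiable credential / SEAL, while
// the ciphertext is what gets written to IPFS.
func sealAttachment(plaintext []byte) (key, nonce, ciphertext []byte, err error) {
	key = make([]byte, 32) // AES-256
	if _, err = io.ReadFull(rand.Reader, key); err != nil {
		return
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err = io.ReadFull(rand.Reader, nonce); err != nil {
		return
	}
	ciphertext = gcm.Seal(nil, nonce, plaintext, nil)
	return
}

// openAttachment is what the web verification app would do after reading the
// key and nonce from the SEAL and fetching the ciphertext from IPFS. GCM also
// authenticates the data, so any tampering makes decryption fail.
func openAttachment(key, nonce, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key, nonce, ct, _ := sealAttachment([]byte("contract bytes"))
	pt, _ := openAttachment(key, nonce, ct)
	fmt.Println(string(pt))
}
```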

SEAL / Verifiable Credential Data

The data schema for SEAL verifiable credentials therefore needs to be extended to support

  • AES-GCM private key for content encryption
  • IPFS URI for message in IPFS
  • IPFS URIs for attachments in IPFS
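For illustration only - the actual schema extension still needs to be designed, and every field name below is hypothetical - the extended credential subject could carry something along these lines:

```json
{
  "credentialSubject": {
    "contentEncryption": {
      "algorithm": "AES-GCM",
      "key": "<base64-encoded key>",
      "nonce": "<base64-encoded nonce>"
    },
    "messageURI": "ipfs://<CID of encrypted message body>",
    "attachmentURIs": ["ipfs://<CID of encrypted attachment>"]
  }
}
```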

Web Verification App

The web verification app needs to see whether message body and / or attachments are available, get the key from the SEAL, retrieve the attachments and message, decrypt them, and

  • display the message, if available
  • offer the decrypted attachments for download, if available

Also, this process should be as non-blocking as we can make it.

Your turn

I hope this explains the rationale and intended behaviour well enough to allow everyone to think it through and provide insights as to what might have been overlooked, as well as suggestions about how to implement, exactly, and how to split up the work.


]]>
https://community.vereign.com/t/adding-decentralised-storage-feature-to-vereign-seal/313#post_1 Tue, 14 Sep 2021 13:50:04 +0000 community.vereign.com-post-652
ChangeLog 12th August Service

Web verification app v1.1.4

Changed

  • UX improvements

Auth v0.0.3

Changed

  • Added support for external identity provider
]]>
https://community.vereign.com/t/changelog-12th-august/308#post_1 Fri, 13 Aug 2021 08:05:07 +0000 community.vereign.com-post-639
Constantly asking to login in Seal Outlook add-in Problem

When using the Vereign Seal Outlook add-in you constantly get the login screen.

Solution

This issue is usually related to your cookies. If your cookie settings are too restrictive, the issue will occur.

Here is the fix

Chromium based browsers

Go to Settings → Cookies and set them to Block third-party cookies in Incognito

On Firefox

Go to Settings → Privacy and security → Standard

]]>
https://community.vereign.com/t/constantly-asking-to-login-in-seal-outlook-add-in/306#post_1 Mon, 26 Jul 2021 08:05:14 +0000 community.vereign.com-post-628
Introduction of the @vereign/lib-seal I’m ok with this solution

]]>
https://community.vereign.com/t/introduction-of-the-vereign-lib-seal/298#post_3 Thu, 10 Jun 2021 14:35:30 +0000 community.vereign.com-post-612
Introduction of the @vereign/lib-seal I’m ok with the proposed solution.

]]>
https://community.vereign.com/t/introduction-of-the-vereign-lib-seal/298#post_2 Thu, 10 Jun 2021 13:41:02 +0000 community.vereign.com-post-611
Introduction of the @vereign/lib-seal @alexey.lunin @zdravko hey guys.

This discussion is a follow-up of the https://community.vereign.com/t/a-time-to-consider-publishing-our-javascript-libs-to-the-outer-world/283/10?u=markin.io.

Let me start directly from the painful points in the current situation with iframe, @vereign/vframe, and @vereign/light-utils

Drawbacks of the current libraries architecture

  • @vereign/light-utils is heavy, and 95% of its functionality is used by iframe and web-verification-app only.
  • @vereign/vframe and iframe need to use shared code for the public API errors (MimeVerificationError, SealAPIError, StatusAPIError). That’s the reason for keeping them stuck together in a single repo.
  • There’s certainly confusion between iframe and @vereign/vframe during discussions.
  • If a third-party integrator wants to read seals from emails and load iframes, he/she needs to install both @vereign/light-utils and @vereign/vframe, which is really inconvenient.

Proposed solution

So, after some thinking I came up with the idea of aggregating existing functionality into a lightweight integration library called @vereign/lib-seal which is going to expose:

  • IframeService - to load iframe APIs
  • SealService - to extract seals from MIME
  • List of public API errors

With this optimization, integrators (Chrome Extension and Outlook Add-in) will be able to sufficiently cover the verification routine on their side using only a single library. And pretty much the same goes for the signing routine.

What’s going to happen to the old libs

  • @vereign/vframe - gets removed
  • @vereign/light-utils - preferably it should be used only by Web Verification App and Iframe. Chrome Extension and Outlook Add-In will be able to work without it.

List of dependents for @vereign/lib-seal

  • Chrome Extension and Outlook Add-in. Using the whole API of the library.
  • iframe - uses Public API errors only
  • @vereign/mime-normalizer - needs only SealService in tests
]]>
https://community.vereign.com/t/introduction-of-the-vereign-lib-seal/298#post_1 Wed, 09 Jun 2021 13:43:42 +0000 community.vereign.com-post-610
Install Vereign Seal extension for Gmail Install Vereign for Gmail extension
  • Go to the Chrome store and install Vereign for Gmail by selecting the Add to Chrome button

image

  • On the pop-up select the Add extension button

image

  • Our extension is now added to your browser

    Note: Vereign for now supports all Chromium based browsers, such as Chrome, Brave, Opera, Microsoft Edge (latest version)

  • Check if the Vereign icon is shown at the top bar on the right-hand side of your browser window. If it isn’t, click on the Extension icon showing a puzzle piece and select Pin Extension right next to Vereign for Gmail.

Extension

  • Close your browser and start it again. This will initiate our extension.

  • Since it is your browser extension that seals your Gmail messages, you will be required to additionally enter your Gmail password whenever the browser session ends and restarts. This is for your own security to ensure the right ownership of the account. You will be getting the following pop-up:

Related topics

Seal messages with Vereign for Gmail

Verify received email in Gmail

]]>
https://community.vereign.com/t/install-vereign-seal-extension-for-gmail/294#post_1 Tue, 25 May 2021 13:01:40 +0000 community.vereign.com-post-596
Client support: What's next I had actually already tested Edge + Vereign seal chrome extension and it was working.

Retested now again to make sure and to get a screenshot.

All we have to explain to users is to enable the option to allow extensions from other stores and go to the Chrome store to install the extension on their MS Edge.

Left screen is Chrome as Receiver. Right is Edge as Sender.

]]>
https://community.vereign.com/t/client-support-whats-next/285#post_2 Tue, 27 Apr 2021 10:29:41 +0000 community.vereign.com-post-578
Outlook Desktop | Refinement of the QR code attaching procedure
markin.io:

Is it enabled by default? Under which circumstances might it be disabled?

As mentioned here, the add-in should not rely on SSO in certain error situations, such as when the user switches to a client that does not support it, or if the user’s mailbox is on an on-premises Exchange server. I think it’s not enabled by default, for example our tenant does not have SSO enabled currently.

If the user tries to seal and send the message without recipients, sendMail will throw an error which should be handled properly in the UI. Not sure how this case is handled currently, so it actually might be an improvement of the UX.

I agree this needs more thorough research before taking a decision how to proceed with onSend.

I just tested it using msal-react and it worked for desktop, which gives much better UX, I think. I’ll do more tests and hopefully this would be the method to use for authentication on desktop, which would be a much cleaner way.

]]>
https://community.vereign.com/t/outlook-desktop-refinement-of-the-qr-code-attaching-procedure/252#post_7 Mon, 26 Apr 2021 12:09:04 +0000 community.vereign.com-post-577
Outlook Desktop | Refinement of the QR code attaching procedure Thank you for the thorough research and lots of valuable details. Great job!
Last Friday I was really impressed by the improvement of the speed of sending email.

Looks like we’re getting really close to having our solution working on Desktop, and I believe we have a decent chance to have it ready for QA in the next milestone.

I shall prepare an update of the mime parser and seal lib in order to be able to extract replied/forwarded parts of emails sent by Outlook Desktop.

Also, if you manage to finalize implementation of the email sending this week, I will be able to spend the next one covering test cases for MIME normalizer.

Let me ask a couple of questions.

but for sure there needs to be a fallback for the cases when SSO is not enabled.

Is it enabled by default? Under which circumstances might it be disabled?

…but for desktop there’s an issue with initially signing in the user with the login popup:

I’ve found a couple of threads explaining a similar issue:

The problem seems to be related to the inability of Desktop client to open popups, which seems fair.

This piece of documentation says that in such cases the app has to utilize the redirect method of MSAL authentication. Can we try this option for the Desktop app?

…sendMail… Implementing this endpoint would require small UX changes since the request body should contain the whole message object

Can you please elaborate? I’m not sure I understand how this might affect the UX.

The onSend version won’t work with Graph API, because of the usage of the sendMail endpoint.

You’ve mentioned that there’s a method which can be used to update email body only, but there’s some issue with synchronisation, and in the end seal is not being added to the email.

I suggest going with the sendMail functionality in order to ship Outlook Desktop to users quicker, and creating a ticket to research the problem with the body-update Graph API method, with all details and an initial MR provided, so that we know where to start.

So, to summarize, here’s what I suggest:

  • Go with SSO as the main method of authentication
  • As a fallback for Web, implement popup MSAL authentication
  • As a fallback for Desktop, implement redirect MSAL authentication
  • Finish implementation of the sendMail endpoint and utilize it for both Web and Desktop.
  • Produce test cases for MIME normalizer (I think that coverage of Windows-Windows and MacOS-MacOS should be enough for now)
  • Create a ticket and initial MR with the research of the functionality that updates only email body, and will allow us to use onSend functionality.
  • Clean up backend which exposes proxyEWS API endpoint
  • Create a ticket to track progress of the Exchange hybrid deployments and leave it for further implementation
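The authentication part of the summary above could be sketched roughly as a selection function (a hypothetical illustration of the proposed fallback order; none of these names are the add-in’s actual code):

```javascript
// Hypothetical sketch of the proposed fallback order: SSO as the main
// method, MSAL popup as the web fallback, and MSAL redirect as the desktop
// fallback (the desktop client cannot open the login popup).
function chooseAuthMethod({ platform, ssoAvailable }) {
  if (ssoAvailable) return 'sso';
  if (platform === 'web') return 'msal-popup';
  if (platform === 'desktop') return 'msal-redirect';
  throw new Error(`unknown platform: ${platform}`);
}
```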
]]>
https://community.vereign.com/t/outlook-desktop-refinement-of-the-qr-code-attaching-procedure/252#post_6 Mon, 26 Apr 2021 09:02:24 +0000 community.vereign.com-post-574
Client support: What's next Vereign Seal currently supports Microsoft 365, Outlook and Gmail via Chromium-based browsers, such as Google Chrome and Brave. Microsoft Edge is Chromium-based and has its own app store at

https://microsoftedge.microsoft.com/

This app store does not automatically pick up extensions from the Google Chrome store, but the user can enable the Google Chrome Extension store to install add-ins, including Vereign:

We should explain to our users how to do this, but it is suboptimal to require users to take so many steps in order to install Vereign Seal in Microsoft Edge. We should, however, at least run occasional tests against this, @QAs .

But it stands to reason that adding Vereign Seal to the Edge Extension store is mostly a matter of policy and business relationship. We should find out how to make this happen.

So, with the caveat above, we have support for

  • Microsoft 365
  • Outlook
  • Gmail in
    • Google Chrome
    • Brave
    • Microsoft Edge

which raises the question: Where are the biggest gaps in our coverage?

Based on Email client usage worldwide, collected from 1.15 billion email opens,

https://emailclientmarketshare.com/

provides a popularity rating of

Rank Client Market Share
#1 Apple iPhone 38%
#2 Gmail 30%
#3 Apple Mail 11%
#4 Outlook 7%
#5 Yahoo! Mail 5%
#6 Google Android 2%
#7 Apple iPad 1%
#8 Outlook.com 1%
#9 Samsung Mail 1%
#10 Windows Live Mail 0%

Which means by popularity, the next targets should be:

  • Apple iPhone
  • Apple Mail

and then

  • Google Android

So it seems the next ecosystem to look at will be that of Apple.

Follow-Up

We should research the extension capabilities for Apple iPhone and Apple Mail and what it would take to integrate Vereign Seal into them.

@kalincanov Would you please put this on the list as research for the next month?

]]>
https://community.vereign.com/t/client-support-whats-next/285#post_1 Sun, 25 Apr 2021 11:14:44 +0000 community.vereign.com-post-573
Outlook Desktop | Refinement of the QR code attaching procedure During the last couple of weeks I’ve been reading lots of documentation related to the MS Graph API, so I’ve researched and practically tested not only the attachment of the seal but also using Graph instead of EWS and the Outlook REST API (which will be fully decommissioned in November 2022) throughout the entire add-in.

So I’ll try to summarize my findings and conclusions here:

1. Authentication and authorization

To call MS Graph, the add-in must acquire an access token from the Microsoft identity platform and be authorized by the user to access their Microsoft Graph data. Microsoft recommends using one of their authentication libraries, depending on the app type, to do this.

I think the best way to authenticate is using SSO, but for sure there needs to be a fallback for the cases when SSO is not enabled. For those cases MSAL works great on web, because it caches the token, which can then be obtained silently without bothering the user at all; but on desktop there’s an issue with initially signing in the user via the login popup:

So the only way I found to authenticate a user is through the Office dialog UI, which would have to ask the user to authenticate every time they open the add-in.

Authorization occurs only once when the user installs and uses the add-in for the first time via a dialog which asks for their consent.

I tried using the msal-browser and msal-react (although it’s still in preview). They are based on MSAL v2.0 that implements the Auth Code flow which is a significant improvement compared to v1.0 (microsoft-authentication-library-for-js/lib/msal-browser at dev · AzureAD/microsoft-authentication-library-for-js · GitHub).

Conclusion:
We can implement SSO (I’ll open a separate ticket for this), but there needs to be a fallback, which, as explained above, may be different for web and desktop for the sake of the better UX that can be achieved with MSAL on web.

I’ll reference here some samples which helped me with the auth implementation:
React sample
Desktop client sample

2. Usage

  • Getting the MIME of a message using MS Graph works well for both signing and verification on web and desktop. The add-in verifies emails sent before that change, too.

  • Attachment and appending of the seal to the message works for both web and desktop using the sendMail endpoint which requires the Mail.Send scope in the Microsoft Identity Platform. Implementing this endpoint would require small UX changes since the request body should contain the whole message object (user: sendMail - Microsoft Graph v1.0 | Microsoft Learn)

  • We can use the Graph API to entirely replace usage of EWS

Conclusion:
Graph API will work for both web and desktop, with backward compatibility.

3. On-premise support

All I found on this matter is that the Graph API supports on-premises deployments of Exchange, but this feature is still in preview, which means that until it reaches the so-called GA (General Availability) status, it is not guaranteed to be stable.

4. Cons

The onSend version won’t work with the Graph API because of the usage of the sendMail endpoint. This would mean extracting a separate project using onSend, which would not work on the desktop client.

]]>
https://community.vereign.com/t/outlook-desktop-refinement-of-the-qr-code-attaching-procedure/252#post_5 Fri, 23 Apr 2021 11:53:47 +0000 community.vereign.com-post-571
Seal | Create/Verify performance review a.k.a "Bring keys back home and marry your Identity" @georg.greve as promised, getting back with the performance tests of the seal creation/verification routines.

For those who are not aware

At one of the previous demos, Georg asked what might be the cause of a possible slowdown in the seal creation process.

I mentioned that one of the things not under the client’s control is the backend calls related to signing/decryption/obtaining of the keys (“key management calls” for further reference).

Performance test

The test covers only the seal creation and email verification functions.

Status submission/verification and seal tail submission are not a subject of this test, as they are out of the scope of the key management logic.

The test has been performed 100 times, and average values in seconds are provided.
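For reference, the averaging methodology can be sketched like this (an illustration of the measurement approach with a trivial stand-in workload, not the actual test harness):

```javascript
// Sketch of the measurement approach: run an operation N times and return
// the average duration in seconds (process.hrtime gives nanosecond ticks).
function averageSeconds(fn, runs = 100) {
  let totalNs = 0n;
  for (let i = 0; i < runs; i++) {
    const start = process.hrtime.bigint();
    fn();
    totalNs += process.hrtime.bigint() - start;
  }
  return Number(totalNs) / runs / 1e9;
}

// Example: time a trivial stand-in for one of the measured routines.
const avg = averageSeconds(() => JSON.stringify({ seal: 'x'.repeat(1000) }));
```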

TL;DR

Create seal function

  • CreateSeal.Total: 1.75s // Avg time to create seal
  • CreateSeal.BackendCalls.Total: 0.87s // Backend calls included in total time

Conclusion: key management backend calls take roughly 50% of the seal creation time.

Verify seal function

  • VerifyEmail.Total: 3.51s
  • VerifyEmail.Backblaze: 3.07s // Average time it takes to obtain seal tail from CDN with the average size of 3.4kb
  • VerifyEmail.Backend: 0.16s // Average time it takes to perform /qrcode-hsm/decrypt
  • VerifyEmail.VerifySignatures: 0.12s // Average time it takes to verify all MIME signatures using WebCrypto on the client

Conclusion: obtaining the seal tail from the CDN is the most time-consuming operation, although it is very volatile. I have been noticing a spread between 300 ms and 6 s to download a tail with a size of 3.4 kB.

The whole report

Create seal function

  • CreateSeal.Total: 1.75s // Avg time to create seal
  • CreateSeal.BackendCalls.Total: 0.87s // Backend calls included in total time
  • CreateSeal.MimeSignaturesCalculation.Total: 0.58s // Avg time to calculate MIME signatures
  • CreateSeal.MimeSignaturesCalculation.Backend: 0.4s // Relevant /hsm/sign calls to backend
  • CreateSeal.EncryptSealData.Total: 0.38s // Avg seal encryption time
  • CreateSeal.EncryptSealData.Backend: 0.31s // Avg time for /qrcode-hsm/encrypt call
  • CreateSeal.CreateSealImage.Total: 0.36s // Avg time to create and prepare seal image
  • CreateSeal.CreateSealImage.CreateQrCode: 0.26s // creates QR code and places it on png template
  • CreateSeal.CreateSealImage.AddMetaData: 0.01s // adds PNG metadata
  • CreateSeal.CreateSealImage.PrepareHtmlTemplate: 0.07s // Prepares HTML template

Verify email function

  • VerifyEmail.Total: 3.51s
  • VerifyEmail.Backblaze: 3.07s // Average time it takes to obtain seal tail from CDN with the average size of 3.4kb
  • VerifyEmail.Backend: 0.16s // Average time it takes to perform /qrcode-hsm/decrypt
  • VerifyEmail.VerifySignatures: 0.12s // Average time it takes to verify all MIME signatures using WebCrypto on the client

Conclusion

It is worth considering local key management strategies for the seal project.

Benefits:

  • Better security
  • Improved performance
  • Decentralisation

Challenges:

  • Multi client/device key management
  • Key recovery and transferability to other devices

At the beginning of the seal project, we wanted to store keys in some persistent storage of the Outlook Add-in/Chrome Extension/Gmail Add-on. However, there was no way to do it at that point in time.

Now we have iframes, and we can manage keys similarly to the Vereign Identity approach, which might also bring us one step closer to marrying with Vereign Identity

@perkon please review

Ticket in GitLab

(https://code.vereign.com/seal/building-blocks/iframe/-/issues/9)

]]>
https://community.vereign.com/t/seal-create-verify-performance-review-a-k-a-bring-keys-back-home-and-marry-your-identity/282#post_1 Wed, 14 Apr 2021 11:27:50 +0000 community.vereign.com-post-562
Request for Input: Decentralised Identity Public Key Infrastructure (DIDPKI) Disintermediation and decentralisation are two of the major benefits and promises of blockchain, so there have been several proposals about how to use blockchain to eliminate weaknesses in today’s certificate infrastructure.

Game of Keys: Too Much Information About Certificate Authorities provides an easily understandable introduction into some of the challenges.

The existing issues can be summarized as

  • Centralisation - a single point of compromise can break security for millions of certificates;

  • Lack of transparency - insight into the inner workings of Certificate Authorities, and thus the ability to verify their correct function, is extremely limited and typically granted only to auditors with a financial incentive to approve;

  • Concentration - traditional Certificate Authorities are mostly in sunset mode with increasing costs for security and compliance, and decreasing revenue from having to compete with free services, especially Let’s Encrypt. The result is a growing consolidation and thus concentration of the trust infrastructure into fewer and fewer critical points.

There is a fairly long list of compromised Certificate Authorities over the years. So the question “What if we could avoid centralised trust?” has become a fairly obvious question to ask.

There are a couple of interesting proposals on the subject, either based on existing chains, such as Ethereum, Bitcoin, Namecoin, or on custom consensus amongst the different PKIs. A Decentralized Dynamic PKI based on Blockchain - Lund University is a recent publication with a pretty good overview.

The proposal made in the paper is interesting, but introducing a custom consensus makes the system susceptible to the same kinds of 51% attacks that can be observed in the wild - so without sufficient adoption, and without an economic or resource-based protection mechanism, the resulting trust level is unclear.

Also, given the emerging standards in this area, it would be preferable if it were based on Decentralized Identifiers (DIDs) v1.0 and linked to the work done by the Identity Foundation, because ultimately the role of certificates is to establish a link between a person, an organisation or a device and the digital interaction that is given validity and trust by the certificate.

In other words, the most scalable, interoperable and valuable implementation of a decentralised, disintermediated PKI on top of blockchain would likely best be described as Decentralised Identity Public Key Infrastructure (DIDPKI).

Initial Thoughts

Such a DIDPKI should likely meet a couple of requirements, such as

It seems that much of the DIF Sidetree Protocol would be re-usable for this purpose, although Bitcoin may be too expensive a chain for the frequency with which one might generate certificates. The use of IPFS for Content Addressable Storage (CAS) as the basis for a Conflict-free Replicated Data Type (CRDT) repository of self-certifying data seems like a very good basis for such a DIDPKI, and the resulting structure of DIDPKI nodes would likely look a lot like that of the Sidetree network:

sidetree-system

Input Sought

This is primarily a collection of some preliminary thoughts, looking for some input and potentially volunteers to help build out a specification. Comments and inputs welcome.

]]>
https://community.vereign.com/t/request-for-input-decentralised-identity-public-key-infrastructure-didpki/271#post_1 Sat, 27 Mar 2021 10:55:17 +0000 community.vereign.com-post-530
Seal | UX improvement: Caching of verification results
sasha.ilieva:

Caching totally makes sense and would improve UX immensely.

:heart_eyes:

Yes, exactly. I think we can achieve a lot this way with comparatively small effort.

I’m fully aware it may not be possible - or it may take the iframe to do some smart magic, perhaps even the caching. If it isn’t possible, we’ll have to live with it.

Agreed. I also think an approach of “show what is cached, lazy update when possible” is the best and most intuitive option here.

That was my hope. If we know this is coming now, we might make our life easier later.

We’ll need to sort out where to do this - in the iframe or in the clients - and then we need to make sure both components understand “cached” vs “updated” information.

Anyhow: Not required / in scope for our first version. But perhaps one of the first improvements afterwards.

]]>
https://community.vereign.com/t/seal-ux-improvement-caching-of-verification-results/260#post_3 Fri, 26 Feb 2021 10:40:37 +0000 community.vereign.com-post-505
Seal | UX improvement: Caching of verification results Caching totally makes sense and would improve UX immensely.

This means caching would be performed when the sidebar has been opened (the add-in is initialized) and the verification process is finished, which would simplify the implementation of caching.

I’ve thought about this case and looked briefly through the Outlook docs, but for now I’m not sure this is possible. This would require more research.

To me the better option, UX-wise, would be to automatically fetch the block info and mail receipt status, while displaying in the respective places of the UI that the cached info is being refreshed.

Although it’s not a priority for the first version of the iframe-rebased product, it would be good to have this feature in mind now, so it’s easier to implement at a later stage.

]]>
https://community.vereign.com/t/seal-ux-improvement-caching-of-verification-results/260#post_2 Fri, 26 Feb 2021 08:09:24 +0000 community.vereign.com-post-504
Seal | UX improvement: Caching of verification results Using SHA256(sealUrl), we have a unique ID for each seal, which is the key to all kinds of verification, including the blockchain record. With the IFRAME approach for verification, that ID becomes the most important input parameter to the verification API, which returns the verification status and the corresponding information.

As we also discussed in Status verification inefficiency the speed of verification is a critical part of the user experience.

Users can be expected to trigger verification on each switch between messages, which means that if they switch quickly between messages (e.g. because they are looking for a specific one), even milliseconds will matter to the user.

This means two things to our solution:

  • We have to expect the verification iframe to be started and interrupted / aborted in quick succession;
  • we will see the same message repeatedly, and any kind of speed improvement will matter greatly.

To me, this looks like a situation where we might benefit greatly from caching.

Caching: Behaviour

The cache should only cache information for mails that the user has actively looked at. In other words: No actively going out to mails in the inbox, looking for mails to potentially verify. But if a user selects a mail and verifies it, that result should be cached.

If at all possible, we should do the same for messages that have been selected and where verification has been started, even if the user switches to another mail. That way we don’t waste resources on triggering new verifications and aborting them - we will still have the result, but it will be put straight into the cache instead of being displayed to the user, since the user has already moved on.

Caching: Threat Scenario

What would it mean to no longer verify each message when it is viewed, but to re-use old information? An attacker might be able to poison or manipulate the local cache, so the verification information might be manipulated. But if we keep the cache strictly local and restricted to the browser/device, that risk seems comparable to an attacker having enough control to also intercept actual verification when it happens and manipulate that result. So this does not seem to add significant risk.

The cache is somewhat short-lived in any case, as it would be wiped when local storage expires, is deleted or refreshed, and we can actively time it out after a couple of days, at most.

Caching: User Experience

A cache would mean that the information may be (slightly, see above) outdated, but it would be instantaneous. If we keep the return value from the iframe from verification and store it with SHA256(sealUrl) as key, the complexity of this mechanism would seem to be comparatively small, but the benefit might be enormous.

The most outdated information displayed will be the block height (the number of blocks written since the mail was sent) and potentially the mail receipt status. The rest of the information is static and should not change.

So from a UI perspective, we would likely want to show that the information is cached, and offer the user the option to actively refresh / verify again.

Alternatively, for cached information we actively refresh only the block height and receipt status: we start out by showing the cached information and update it once we have fresh data.

This is not a priority for the first version of the iframe rebased product, but I would really like your thoughts and input on this, @perkon @markin.io @sasha.ilieva @zdravko

]]>
https://community.vereign.com/t/seal-ux-improvement-caching-of-verification-results/260#post_1 Thu, 25 Feb 2021 12:06:34 +0000 community.vereign.com-post-503
Seal messages with Vereign for Gmail Overview

Sealing and verifying are two different operations, and for ease of use both are performed by the Vereign for Gmail extension.

Seal messages

To Seal your emails, first you need to install our Vereign for Gmail Extension.

Sealing an email is very easy - no different from sending an email as usual:

  • Login to your Gmail
  • Make sure you have entered the password required for additional security
  • Click on Compose to create a new email
  • If you see the notification “Message will be sealed and secured by blockchain”, then your extension is enabled. Just start writing your email.

Note: If you don’t see “Message will be sealed and secured by blockchain”, you can enable it by clicking on the Vereign icon in the bottom toolbar and shifting the lever by clicking on it again:

  • Complete your message and click on the Send button. This triggers the creation of the seal in the form of a QR code.

Your message is now “sealed at origin and secured by blockchain”.

Related topics

Install Vereign Seal extension for Gmail

Verify received email in Gmail

]]>
https://community.vereign.com/t/seal-messages-with-vereign-for-gmail/256#post_1 Thu, 11 Feb 2021 13:20:36 +0000 community.vereign.com-post-449
Outlook Desktop | Refinement of the QR code attaching procedure
markin.io:

I propose to leave Outlook Web Version with Option 1 unless something indicates that it cannot be used together with Option 3.

Excellent. That means we can keep this as is and work forward from there. I like it.

]]>
https://community.vereign.com/t/outlook-desktop-refinement-of-the-qr-code-attaching-procedure/252#post_4 Mon, 08 Feb 2021 16:42:35 +0000 community.vereign.com-post-403
Outlook Desktop | Refinement of the QR code attaching procedure

Are you proposing to use two methods for sending, depending on the client? In other words: Will Outlook on the Web stay with Option 1 and only Outlook on the Desktop will switch to Option 3 then? Or would both switch to Option 3?

I propose to leave Outlook Web Version with Option 1 unless something indicates that it cannot be used together with Option 3.

]]>
https://community.vereign.com/t/outlook-desktop-refinement-of-the-qr-code-attaching-procedure/252#post_3 Mon, 08 Feb 2021 16:27:47 +0000 community.vereign.com-post-401