MinexBank Wear application for Android smartwatches
Prepare your watch for MinexBank Wear
Hey MNXers! Our development team wants to please you with one more slight but handy product — a MinexBank Wear application for Android smartwatches. That’s right, now you can monitor all the MNX data just by looking at your wrist.
The application does not require authorisation. Just open it on the watch,
and you'll get information about stats, price, trading volume and MNX's rank among the top cryptocurrencies in real time.
But most importantly, the app can be used to monitor MinexBank parking rates.
To do this, just swipe right or left. We also want to note that
this is only the first version of the product, and its functionality will
be significantly expanded in the future.
A Conversation With Lisk Backend Developer and SocketCluster Founder Jon Gros-Dubois
Jon Gros-Dubois joined Lisk as a Backend Developer last month. He is also the founder
of SocketCluster, a software framework designed to simplify development
of highly scalable systems, which can send and receive data in real-time
between users and machines. We had a chance to sit down and chat about
what he does at Lisk and learn a bit more about SocketCluster.
Q: I noticed you have an interesting accent. Where are you from?
I was born in Guadeloupe, a French island in the middle of the
Caribbean. I lived there until the age of 11 before moving to Australia.
I lived in Australia for many years, but settled in Europe two years ago.
Q: When did you begin coding?
I began coding about 14 years ago, at the age of 14. I have experience
in C/C++, C#, Python, Java, PHP, AVR Assembly, ActionScript and
JavaScript. I work mostly with JavaScript these days; it's
flexible and expressive, and it lets developers focus on logic rather than
boilerplate. The language has also managed to
evolve over time without ever breaking backwards compatibility; this
means that code written years ago will
still run on all of the latest engines.
Q: Why should people care about decentralization and blockchain?
For the past 20 years, I have witnessed the increased centralization of
everything. Companies have been getting bigger and fewer in numbers,
resulting in a concentration of money and power in fewer hands.
Government regulation in many countries has also made it difficult for
small-time investors to participate in the growth of the technology
economy. This has resulted in a large number of people being locked out
of opportunities. Decentralization through blockchain will help to
better distribute and allocate funding so that it is evenly spread out
and accessible to more people. Blockchain projects achieve dominance by
bringing people in instead of locking people out.
Q: How would you describe Lisk?
I think of Lisk as a decentralized democratic economy. Before joining
Lisk, I spent a great deal of time researching the reach, flexibility,
scalability potential and economic incentives of various
cryptocurrencies. I believe Lisk ranks very high in all of these areas,
which makes me very excited about its future.
Q: What makes Lisk different from other blockchain projects?
Lisk's approach will be great for ecosystem growth. I believe that the structure of
Lisk’s community/network will make it easier to reach consensus if/when
it comes to big decisions. It should also lead to better outcomes. In
addition to achieving scalability by offloading some of the workload to
sidechains, Lisk’s Delegated Proof-of-Stake architecture should also
make it easier to scale the mainchain if/when needed. Also, the fact
that you can use your voting power to earn ‘interest’ on your Lisk
holdings is brilliant. When you factor in sidechains, I think that Lisk
will strongly reflect what an ideal economic system should look like.
Q: How did you spend your time prior to joining Lisk?
I was mostly doing contract work as a software developer/engineer for big
companies and startups across different industries. At the time, I
wasn’t sure which industry I wanted to specialize in, so contracting
allowed me to try different areas. In addition to working full time as a
software engineer, I’ve always kept an open source project on the side.
Open source work is fun; you can find any problem that interests you
and start to implement a solution for it. With open source work, there
are no business constraints, therefore you can think very long term and
pick very difficult problems. You can aim high and learn as you go. My
first major open source project was a web content management system for
developers; similar to Wordpress, except with a user-friendly
drag-and-drop interface. It never became popular, but I learned a lot
from building it, so it was still worthwhile. After that, in 2012, I
started working with Node.js and that’s when I became interested in web
application frameworks and real-time technology. This eventually led me
to start SocketCluster. The common thread in all my open source work so
far has been that I’ve consistently been working on platforms and
frameworks for developers; to give them tools to build websites and
systems more efficiently. Lisk feels like a continuation of that for me.
Q: Tell us a bit more about SocketCluster.
SocketCluster is a real-time (WebSocket) framework, a pub/sub data
transport layer and a protocol. It makes it easier for developers to
build highly scalable systems which can send and receive data in
real-time between users and machines. It’s general purpose so it can be
used for building many different things like chat systems, stock price
tickers, trading systems and pretty much any other system/app which
requires moving large amounts of data between clients and servers in
real-time. I started SocketCluster several years ago because alternative
solutions couldn’t scale beyond a single process on a single CPU core.
CPU design trends meant that CPUs were getting more and more cores, so I
felt that modern systems should be able to automatically make use of all
of these cores to get the best performance possible.
Q: What problems does/will SocketCluster solve?
SocketCluster can solve the problem of transferring a potentially
unlimited number of real-time messages across a potentially unlimited
number of machines/nodes using all available CPU cores on each machine.
It can also be used in simpler configurations; it’s designed to be
flexible and extendable. When a system only needs to process a few
thousand messages/transactions per second, things are relatively simple;
this is because that kind of processing can effectively be handled by a
single CPU core running on a single machine. Once you go beyond a few
thousand transactions per second, things suddenly get ugly. Past a
certain point, no matter how much you optimize for performance, the code
may not be able to process any more data; it might use up 100% of a
single CPU core and leave all other cores on the machine essentially
idle. This is partly because for many years, CPUs have not been getting
any faster in terms of raw clock speed; they seem to have peaked at
around 5 GHz (when overclocked). Recent CPU improvements have focused
almost entirely on increasing the CPU core count which is currently
approaching 72+ cores on commercially available high-end CPUs.
Unfortunately, adding more CPU cores comes with a significant tradeoff; a
program can only make full use of available CPU cores if the underlying
task can be parallelized. ‘Parallelized’ in this context means that the
underlying task can be broken up into smaller sub-tasks which can be
processed in parallel (independently, at the same time) as opposed to
serially (one after another). This constraint is also the reason why the
blockchain has been so difficult to scale. SocketCluster’s pub/sub API
supports scalable message passing across many machines/nodes. If set up
correctly, it can scale linearly with respect to the number of messages
as you add more machines/nodes to the network. SocketCluster shards
pub/sub channels across available CPU cores and also across available
machines on a network. SocketCluster essentially opens up opportunities
for Lisk to scale linearly, so it can always reach more
users and handle more transactions.
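A minimal sketch of the sharding idea described above (illustrative only, not SocketCluster's real internals): hash each channel name to a worker index, so a given channel is always handled by the same worker or core.

```javascript
// Illustrative only: deterministic channel-to-worker sharding by hashing
// the channel name (djb2-style hash). Not SocketCluster's real internals.
function channelShard(channelName, workerCount) {
  let hash = 5381;
  for (let i = 0; i < channelName.length; i++) {
    // '>>> 0' keeps the hash in the unsigned 32-bit range.
    hash = ((hash * 33) ^ channelName.charCodeAt(i)) >>> 0;
  }
  return hash % workerCount; // worker index in [0, workerCount)
}

// Every publish/subscribe on a given channel lands on the same worker:
const shard = channelShard('mnx-price', 8);
```

Because the mapping is deterministic, publishers and subscribers on the same channel always meet on the same worker, and adding more workers or machines simply widens the modulus.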
Q: Can you describe a typical day on the job for you?
I don’t know if I’ve had any typical days yet. I’m still relatively new
to the Lisk team so I’m still in learning mode. There are a lot of
details to absorb when joining an existing software project. I recently
spent a lot of time writing tests; I spend a fair bit of time reading
and running other developers’ code to understand how things work in
detail. Being new to a project is challenging, but it's also highly rewarding.
Q: What skills and technologies are you the most interested in improving upon or learning?
My main areas of interest so far have been scalability and distributed
systems. Now my focus is blockchain technology; there is a lot left for
me to learn and it’s always evolving, so the learning will probably
never stop. I like to come up with hypothetical strategies for scaling
cryptocurrencies, so learning about all the main algorithms and
architectures that are used in the industry is a good way to come up
with creative strategies to solve various problems.
Q: What is one piece of advice you would give to someone looking to pursue coding?
You don’t need to be super smart to be a great coder; you just need to
be curious, persistent and reasonable. It’s important to listen to other
people’s opinions. When your code gets too clever or complex, it
sometimes means you need to take a step back and consider alternative
approaches. Coding is a social activity and there is rarely an absolute
right or wrong way to do things.
Q: What industry sites and blogs do you read regularly?
I read a lot of tech-related stuff. Hacker News, Hackernoon,
TechCrunch, VentureBeat, Reddit and lots of different blogs and
publications on Medium.
Q: What do you like to do in your free time?
A: Aside from work, I enjoy contributing to open source projects, reading blogs, and hiking.
Q: What is your favorite book and why?
My favorite book is “Fooled by Randomness” by Nassim Taleb. It’s an
interesting book about the role of randomness in life and in the
markets. It gives a really good sense of how much more complex things
are than they appear. This resonates with me because software
development is often about mitigating the effects of chance and
randomness; and that generally involves being exposed to a great deal of
complexity that most people tend to miss.
Siacoin - Summary of the upcoming Sia Hardfork @ block 139,000 on 21-Jan-2018
I've been seeing a lot of misinformation and confusion surrounding the
upcoming Sia hardfork, so I'm collecting the details in a single thread.
When will the hardfork occur?
The hardfork will occur at block 139,000.
You can check the current Sia block height at the official block explorer.
There is not an exact calendar time associated with the fork, but we
can estimate based on average block time. At the time of this writing,
there are 139,000 - 138,426 = 574 blocks until the hardfork. At ~30
minutes per block, the hardfork is on track to occur in ~12 days, on Jan. 21st.
SiaStats shows a countdown with the most up to date estimate of when the hardfork will occur.
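The estimate above is just arithmetic on the remaining block count; a small helper (hypothetical, mirroring the numbers in this post) makes the calculation explicit:

```javascript
// Hypothetical helper mirroring the arithmetic above: days until a target
// block height, given the current height and the average block time.
function daysUntilBlock(currentHeight, targetHeight, avgBlockMinutes) {
  const blocksRemaining = targetHeight - currentHeight; // 139000 - 138426 = 574
  const minutesRemaining = blocksRemaining * avgBlockMinutes; // 574 * 30
  return minutesRemaining / (60 * 24); // divide by minutes per day
}

// At the time of writing: roughly 12 days until block 139,000.
const eta = daysUntilBlock(138426, 139000, 30);
```

The actual date drifts as block times vary, which is why a live countdown like SiaStats is more reliable than any fixed calendar date.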
Why do some sources say the hardfork will occur on Jan. 31st, 2018?
The original estimate was "around the end of January" so someone put
it in a crypto event calendar as Jan. 31st and everyone thought that was
the exact date.
Will the hardfork result in new coins?
Hardforks only result in new coins when the fork is contentious (i.e.
a separate group is supporting the old fork). This is an
uncontroversial hardfork and no development team has expressed interest
in supporting the old fork.
Will the hardfork break any ASICs?
No, this hardfork was announced in December 2017 and is unrelated to any discussion of a softfork to protect the network from misbehaving ASICs.
Didn't the hardfork occur already at block 135,000?
There are two separate hardforks.
The first one did occur already at block 135,000, on 2017-12-06. But that hardfork contained an unforeseen bug, which required Sia to undergo a second hardfork, which will occur at block 139,000.
Why are these hardforks occurring?
The Sia difficulty adjustment algorithm needed a change to ensure that
when ASICs begin mining Siacoin in mid-2018, the massive change in
hashrate will not break Sia's consensus functionality. The first
hardfork (at 135,000) was an attempt to make this change, but it
included a bug, which necessitated a second hardfork (at 139,000).
What do I need to do to prepare for the hardfork?
If you run a Sia node (e.g. Sia-UI), upgrade to 1.3.1 or later.
But don't worry, even if you don't do this, your coins are not at risk. You can always upgrade later and recover your coins from your wallet seed.
Does it matter if my coins are in an exchange when the hardfork occurs?
No. Regardless of whether your coins are in an exchange or your local
wallet at the time of the hardfork, there will be no change to your
coins before or after the fork.
What will change after the hardfork occurs?
Block times will reduce from ~30 minutes to ~10 minutes. It currently
takes ~30 minutes for each transaction confirmation. After the
hardfork, confirmations will occur roughly every 10 minutes (the same
speed they were prior to the 135,000 hardfork).
For Siacoin miners, this will bring mining yields back to roughly the
same level they were before the first hardfork on 2017-12-06.
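The change can be sanity-checked with one line of arithmetic (illustrative helper, not from the Sia codebase):

```javascript
// Approximate confirmations per hour before and after the fork.
const confirmationsPerHour = (avgBlockMinutes) => 60 / avgBlockMinutes;

const before = confirmationsPerHour(30); // pre-fork: 2 confirmations/hour
const after = confirmationsPerHour(10);  // post-fork: 6 confirmations/hour
```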
Navcoin - Our 2018 Roadmap is Live
Our 2018 Roadmap Is Live!
We're excited to be sharing our 2018 roadmap with you this week. The
NavCore team has been working hard to formulate a solid strategy and
clear vision for NavCoin. The release of our 2018 roadmap is the first
stage of sharing our wider vision. We feel this roadmap supports our
‘big picture’ mission of simplifying cryptocurrency, and forms a clear
path forward for the future.
As part of this, we are hugely excited to formally introduce ‘Valence’,
our blockchain application platform. This in itself is a huge project,
and will feature many more projects than we've included in this initial
roadmap; these will be published throughout the year.
We anticipate that there will be a lot of questions, so we'll be creating
and publishing content to help explain our roadmapped projects in more
depth. This content will be supported by the release of the Valence
Technical Whitepaper in the coming weeks. Our upcoming Strategic Plan
will also help shine a light on the direction we are taking. This is
just the beginning.
NavPay & The iOS App Store
We'd like to address the confusion in the community that was generated by
our social post yesterday regarding NavPay, Apple and the App Store.
In December, the NavPay app was rejected from the App Store due to some
minor technical feedback. We have since fixed all the technical issues
Apple had with the application and those have now passed review by the
App Store. There is currently only one outstanding item holding us back
from being listed in the App store — that NAV needs to be added to the
approved digital currency list.
We are in talks with Apple about getting NAV whitelisted. There are
already quite a few digital currencies approved for iOS, so we are
hopeful that this will be a speedy and relatively easy process, but we
are unfortunately still at the mercy of the App Store until this
approval is secured.
Once we can get NAV approved for use on iOS, it means that NavPay can be
listed in the store, and any iOS app will be able to use NAV in their
app if they choose to adopt it. We are doing our best to make this
happen! So stay tuned for our progress with NAV approval.
Until NavPay is approved in the App Store, you can still use the NavPay web
wallet on iOS devices. For an optimum experience, we recommend using a modern mobile browser.
This week one of our senior developers, Alex, gave a lecture at the Beuth
University of Applied Sciences in Berlin. Alex presented to a large group
of students on the basics of cryptocurrencies, NavCoin, and blockchain
technology. We believe education is hugely important in the
cryptocurrency industry, and are excited to have team members
contributing to the spread of knowledge on both Blockchain and NavCoin.
Getting the next generation of developers excited by the future of
blockchain is of benefit for everyone.
Team Processes & Scalability
With so many big projects in the pipeline, we've been spending some time
working on designing and implementing scalable internal team processes.
Our goal is to create robust development, build and deployment practices
amongst the development teams while fostering a similarly considered
approach to our marketing, administration and business strategy teams.
We are in the process of engaging experts in each of these branches to
review our processes, and advise us on how we can improve them to keep
things running smoothly as we look to onboard more team members.
That's it from the NavCore team this week. We hope you all enjoy sharing in
our vision of where we are taking NavCoin and Valence in the coming
year. Exciting times ahead!
MyTrackNet, Viso and Likey will receive advice, PR, marketing and ICO
support, with future projects being allocated a total budget of 1 million WAVES.
Waves Lab, the incubator for new projects on the Waves platform, has
announced a total budget of 1 million WAVES for future participants, as
the first tranche of entrants launch their applications. The commitment,
worth approximately $9 million even after the recent cryptocurrency
sell-off across the board, places the Waves Lab on a long-term footing
and ensures that the coming years will see many more promising entrants
accepted to the programme.
“We are happy to be a part of Waves Lab. The Waves community is very strong, and
this gives an unfair advantage to those who build a business with Waves.
Simdaq is very much community-focused, and it's great to see Waves Lab
helping projects like ours. We look forward to future
cooperation,” says Evgeniy Dubovoy, CEO of Simdaq.
The first set of participants include Simdaq, a social trading platform; MyTrackNet, a blockchain-based geo-tracking application; Viso, a hybrid e-payments/cryptocurrency payment platform; and Likey, a one-stop-shop for loyalty programmes.
“The market lacks a professional approach, as it's a young and
dynamically developing sphere. We are happy to be part of Waves Lab and
will surely gain useful competencies which will benefit our
project,” says Ilya Esterov, CEO of Likey.
The initiatives will receive advice, PR, marketing support and technical
assistance for their ICOs, as well as endorsement and contacts within
the wider Waves community, helping them to get off to a flying start.
The first pre-ICO is in process, with MyTrackNet already having
collected $530,000 with the help of the Waves Lab team and partners. The
pre-ICO will end on 30 January.
“We started MyTrackNet with a vision and passion, but without the necessary
funding to build a big project like this. Waves Lab was a great help for
us, not only in terms of the funding but also for the lightning-fast
blockchain platform. Waves has almost instant transactions, a factor
which is very important to run a project like MyTrackNet. The people
behind the Waves platform are a major factor for our success, providing
their knowledge and help in the process of our pre-ICO and more
generally for our project,” says Dimitrios Moschos, CEO of MyTrackNet.
“Transaction speed is of paramount importance in the financial sector.
VISO applied to Waves Lab, because Waves blockchain provides the
necessary technical support and the highest possible speed at this stage
of technology development. Besides, the Waves Lab incubator advises
blockchain projects on all issues related to business development on
blockchain and helps our project grow faster through its competencies.
Also, the residents are able to conduct early tests to adjust the app for
future releases, which is very important for a tech startup,” says Egor Petukhovsky, co-founder of VISO.
Existing partners of the Waves platform are excited about Waves Lab too.
“An incubator focused on cryptocurrency-based startups is a great idea
because of Waves' deep expertise in this market and its investors.
Traditional acceleration programs don't have enough experience to work
with blockchain-focused projects and communities just yet, even though
it's a great opportunity for startups to receive funding and develop
their businesses,” says Dasha Lyalin, COO of RAWG, a blockchain-based service for gamers,
“So we are sure that it will be a great success for our partners and we
will be happy to have a synergy with residents of Waves Lab.”
For any inquiries or questions regarding Waves Lab please contact us by email: email@example.com.
MaidSafe Dev Update - Marketing, SAFE Authenticator & API & More
Here are some of the main things to highlight this week:
- Yesterday, we released a new video that explains how the SAFE Network differs from blockchain-based solutions.
- The poll for choosing a proposal for the Safecoin Video Animation CEP is closing tomorrow. Forum users at trust level 1 and above can vote here.
- The Marketing team has created a new Medium publication (https://medium.com/safenetwork) to collect content related to the SAFE Network in a single location.
- All members of the SAFE Client Libs team will soon join the Routing team (some have already joined) to help with Routing development.
New video released
As mentioned in last week’s update about the marketing plans for H1 2018, we are keen to share with the community the broad messages that we’ll be focusing on when talking about the project.
One of these messages highlights our distinct approach, which differs from,
and we believe improves upon, blockchain solutions. In order to support
this message, we released a new video yesterday.
We’d be grateful if you could all share it widely across your own
social media channels. We need this message out there to build upon the
differentiation as the year moves on.
SAFE Network Primer
We've also identified that we have a challenge in directing people towards
resources. Just before Christmas, we received an early present from @jpl and @polpolrene
in the form of a lengthy document called ‘The SAFE Network Primer’. The
product of a huge amount of time and effort, this is a 30-page
introduction that can be shared freely with anyone who is new to the
project and looking for a summary. It’ll also be useful to those who
perhaps have been following the project for a while but don’t have the
time to forage around the Forum as they field questions from others. We
now have a final version of this document and we’ll be pushing it out to
the community any day now. Once again, we’d ask everyone to share it as
widely as possible. Thanks again to @jpl and @polpolrene for all their hard work.
Community Engagement Program update
We've seen a good deal of interaction in our CEP to create a Safecoin video.
We received four fantastic submissions, all quite different, and the poll
will be closing tomorrow (Friday 19th). The team has tried to stay out
of the conversation on the Forum as far as possible in order to ensure
that the community isn’t influenced in any way and we’d encourage you to
vote if you haven’t already. We’re looking at a few changes for the
next CEP, including the funding mechanism given the backlog on the
network so we’ll keep you all updated.
New Medium publication
Over the years, there has been a variety of written commentary spread out
between blogs (individuals and the company) and on other websites. As an
experiment, we’ve now pulled together a Medium publication (https://medium.com/safenetwork)
to collect the content in a single location. Medium brings with it an
additional virality that extends beyond our existing networks by tapping
into individual social networks (and also through regular email
updates). Moving forwards, we’d like to curate content about the Network
that’s posted on Medium (a good example is @goindeep’s recent posts). So if you create anything, please let @dugcampbell or @sarahpentland know.
Team meeting at HQ in April
As MaidSafe employees are widely distributed around the globe, we’re keen
to bring the team together more regularly. To start this process, we’ve
arranged a weekend in April when everyone will descend on our Scottish
HQ so that we can brainstorm and build out a few strategies in person.
More on this to follow.
Finally, thanks for your support on the migration to r/safenetwork.
We now have well over 1,000 readers and we’ll continue to focus on
growing this community each week. As a reminder, please support us by
subscribing to the subreddit - and also re-sharing any relevant articles
that you find elsewhere on r/safenetwork.
SAFE Authenticator & API
We upgraded the system_uri library to the recently released v0.4.0 in safe_app_nodejs, and removed the libwinpthread.dll as a dynamic library dependency since it is now being statically linked in safe_app.dll. We also fixed some minor issues in the safe_app_nodejs documentation reported by @DaBrown95 (thanks, DaBrown95!). A new patch version of safe-node-app (v0.6.1) has been published on npmjs.com with these changes, and the updated documentation was also published at http://docs.maidsafe.net/safe_app_nodejs.
One of the team is working on making sure the example apps can receive the
authorisation URI from the Authenticator even when they are launched in
dev mode. He has also been working on upgrading Node.js to v8.x in the
example apps as well as in safe_app_nodejs.
We are also working on adding a new function to the safe_app_nodejs API (already available in safe_app
lib) which allows the user to retrieve the list of containers, and
permissions granted for each of them, from an authorisation URI, i.e.
without the need to connect to the network to retrieve such information.

Another team member just got back and is fully focused on Peruse browser development.
We’ve had some initial design meetings to define the UI/UX to be
implemented in Peruse before we can officially release the first
version. In parallel to this, we keep fixing bugs and applying
enhancements to it.
The C# API has been progressing well. The
safe_app API implementation is ongoing and the authenticator bindings
are also being worked on in parallel. Tests must be wired up to validate
the implementation before we start integrating it with the Xamarin
applications. As mentioned in last week’s dev update, improvements to
the C# API to make it more dev friendly have been implemented in the ongoing WIP branch.
SAFE Client Libs
A couple of obscure bugs have been reported by the community member @jlpell
and the front-end team. We’re actively investigating and debugging both
of them: the first one concerns the app revocation procedure, which
previously had other issues which were fixed and covered with tests.
Now, the first look at it shows that this case is different, so we’re
trying to reconstruct the pre-conditions and the environment that led
to this error. Another problem with SAFE Client Libs was discovered by @bochaco
and it causes memory errors and segmentation faults in some
circumstances. To get this fixed & covered, we’re also checking the
comprehensiveness of our test suite and looking for ways to extend and
improve it. After we get done with the fixes and the immediate
tasks, we’re planning to move the entire team to help with Routing
development. @adam has already joined it and @marcin and @nbaksalyar are currently catching up with the recent progress and documentation.
We're also continuing with small improvements here and there. For example,
continuous integration build times started to get too long, and builds
constantly failed with a timeout error, which affected our workflow. We
split the build process into several stages which now run in parallel
and make build times more manageable.
Routing & Crust
In Routing, we are still mostly fleshing out the flows for splits and
merges. We are now aiming for a less complicated algorithm. We anticipate
that this refined approach will cause fewer potential
issues; however, the edge cases still need to be considered to confirm
it's viable, which is time-consuming and involves many active
in-house discussions. Apart from that, work on the ageing simulation has
been restarted. We identified a number of issues that need to be
resolved, the biggest of which is making the simulated messages reflect
the proposal more closely. Current simulation code simplifies some
details of message flows a lot, which we suspect might be significant
for the results. The other issues are new features that we would like to
test, like using alternative relocation triggers or modifying some
minor details in the way peers age.
In Crust, we finally tracked down the issue with uTP connections. The problem was actually in the p2p crate, with connection information exchange: it would simply time out too soon, after 20 seconds of being idle. For now, the timeout value was extended,
but in the future, this could possibly be made configurable. The other
issue in Crust that made it so hard to debug this uTP problem was that
when p2p failed, it wouldn’t display an error.
Additionally, IGD was disabled for rendezvous/hole-punched connections.
That was redundant anyway because if we detect an IGD enabled device, we
can connect directly to it instead, which is more reliable. Also, we
did a fair amount of code refactoring, we fixed a lot of linting issues
and wrote more tests for both the p2p and crust
crates. All in all, during the past week we made a couple more steps
towards more robust peer-to-peer connections with Crust and it was
pretty fun to see peer-to-peer chat working: no intermediate servers, obscure cloud services, APIs and so on, just real, old-school networking.