FP Complete Staff Docs
Welcome to FP Complete! This site is both your onboarding guide and a general reference for information at FP Complete.
Common links
- Start here — onboarding quickstart guide
- Google Drive shared folder — request access; not tied into Microsoft 365; includes meeting recordings.
- Docs repo — the source for this site, please feel free to make edits if you see places for improvement!
- Training slides — detailed technical content in the form of training slides
- Email/Calendar
- Slack — also, check out the "Common documents" folder in the #general channel for additional helpful information.
- Projector time sheet — for logging hours worked.
- FP Complete ChatGPT Account — highly recommended to get this set up and connected with your Outlook and Slack for a great search experience.
Start here: your first week
New to FP Complete? This page gives you the minimum steps to get productive fast. You can go deeper in the linked docs as you go.
If you get stuck
- Ask in Slack (default to #engineers or a project channel).
- Still blocked? Ping your manager or Michael.
Quickstart checklist
- Sign into Microsoft 365 at https://office.com and set backup email/phone plus passwordless auth.
- Install Slack (desktop + mobile). Join your project/customer channels and the engineering channels listed in engineering onboarding.
- Send Michael your GitHub username to join fpco.
- Make sure you're on the weekly engineering call and relevant customer meetings.
- Log into ChatGPT at https://chatgpt.fpcomplete.com for Codex and better search than Slack AI.
- Confirm you can reach Projector and know which project to bill (see Projector).
- Read engineering onboarding and skim skills for how we work and learn.
Habits that help
- Enter time in Projector daily; submit everything by month end.
- Post concise updates when blocked or changing direction; after meetings, write down decisions so there is an artifact.
- Keep your calendar current with working hours and "Unavailable" blocks.
Engineer onboarding
Week 1 checklist
- Send Michael your GitHub username to join fpco and confirm you can push/PR.
- Share a short bio with your manager to be shared in the #general channel.
- Join the weekly engineering call and introduce yourself to the team!
- Log into Projector and know which project to bill; start entering hours daily.
- Pick a small change (docs/infra/tests) and make a PR to ensure access and CI work.
Communication
Team philosophy:
- Text-based communication is key
  - Asynchronous - scales well with large numbers of people in different timezones
  - Recorded - can easily refer back to decisions later
  - Clear - if well worded, easy to understand the intent and context in the future
- Voice/video calls are vital
  - Higher bandwidth than text - more data can be shared
  - Very useful for clearing up misunderstandings
  - Video is generally recommended over pure audio
- We use both!
Document versus chat
We have two common ways of communicating in text:
- Using a chat app (Slack)
- Writing a document (Slack Canvas, Google Drive, etc)
Here are some general guidelines for knowing which one to use:
- Is this a short question/discussion that can be quickly resolved? Use a chat app.
- Do I want to produce a long-lived artifact from this discussion? Use a document.
- Do I anticipate a lot of edits and comments? Use a document.
  - Documents make it very easy for people to make suggestions
Which chat app to use?
Some customers will have special chat app needs. For example, many blockchain companies will use Telegram extensively for partner communication. Questions of when to use customer-specific apps should be discussed with the project lead. This section gives general advice for all FP Complete work.
For internal FP Complete communication, please use Slack. Engineers should join:
- #general (automatically added) for general messages
- #engineers for engineering discussions
- #random if desired, to discuss any random topic
- #beginner-questions for more basic engineering questions
- #marketing if you're interested in website and other marketing discussions
- #blockchain for general blockchain discussions
- Customer-specific channels depending on which projects you're working on
Video/voice calls
Microsoft Teams is the default video conferencing app in Outlook. Feel free to use it if you're comfortable with it. If you have issues, don't hesitate to switch to an alternative video conferencing tool, such as Google Meet.
There are two important notes about video/voice calls:
- Use them! It can be tempting to exclusively use text communication, but there are some topics that will simply work better in video/voice calls.
- Record outcomes! It's too common for people to have a meeting, make a decision, not write it down anywhere, and then either disagree on the outcome or forget the outcome entirely.
- It's tempting to record all meetings and use that as the "record," but it's too difficult to use as a source of truth.
- Instead: after every call, be sure to record decisions. That can be in text chat apps, a document, an issue tracker, etc.
- The rule is: A meeting with no artifact never happened!
Which doc system to use?
Simple recommendation:
- If you're writing the doc for yourself, use whatever you want.
- If you're going to ask for feedback from others, especially non-engineers, use Google Drive.
- Long-lived docs managed by engineers are usually best maintained in Git repos as Markdown files (like this site).
Communication style
- Polite, kind, and direct
- Never attack another person on the team!
- However, we should share honest opinions about code, approaches, etc.
- Always assume: everyone on the team wants to work together to create the best possible result
- If you doubt the previous point, speak with your manager and/or Michael
Source code repos
- FP Complete repos: https://github.com/fpco
- Most work happens on customer repos
- If you need access to the above, ask!
Time tracking
We use Projector as our time tracking system, both for billing customers, and for paying our own staff. All new team members need to have a session with Bilal on how to use Projector.
Availability
- FP Complete offers significant flexibility in your work hours
- Our team is spread out across many time zones
- Requirements
- Respond to asynchronous messages promptly, usually within 1 business day
- Make yourself available for synchronous meetings with coworkers and customers at reasonable times
- Sometimes this may be outside normal working hours
- Actually show up for meetings!
- Sounds obvious, but has been an issue in the past
Time off
- If you intend to take time off, arrange it in advance
- Discuss with:
- Your manager
- Project leads you’re working with
- Bilal (if working on customer projects)
- Customers (after discussing with lead and Bilal)
- Update your calendar with “Unavailable” blocks for meeting scheduling
Unfortunately, there will be unexpected cases of illness and emergency. When this happens, please inform your manager and team members of your unavailability as promptly as possible.
Deadlines
Without clear, well-defined deadlines, we hit common failure modes:
- Blocked indefinitely on somebody's feedback
- Spending more time on something than we should
To avoid these:
- Have clarity about what we mean by a deadline
  - Importantly: what degree of confidence do you have that this deadline will be achieved?
- Be clear about the scope of what we're delivering
- Changes in requirements must be called out and their impact estimated
- Always clarify the next steps, e.g., "we'll write up the requirements document by Friday and then estimate the first phase of work"
Time boxing
- Set time boxes on tasks
- Far too easy for people to have wildly different ideas of the impact of a work item
- Can be a forcing function to find misunderstandings about the task
- Example timebox: spend up to 8 hours on task X, but if you get to 4 hours and think it will take longer, ping me to discuss before moving forward.
- Ideally, every task should have a timebox associated with it.
Projector
Projector is the tool we use for hours entry at FP Complete. This is an absolutely vital part of your job! Without hours entry, we cannot properly bill customers or track burn-down of contracts, and, for people paid hourly, we can't pay you. You should enter and submit your hours on a daily basis. Ideally, set a daily reminder and enter hours at the end of each day of work.
Hours must be entered minimally each week, and must be fully submitted at the end of each month.
Which project to bill time against?
Generally speaking, you should bill your time against whichever project you are working on. This includes time spent planning, writing emails, meetings with coworkers, etc.
When should I bill time against internal projects?
The X-FPCO client in Projector is used for internally billable work. This applies to things like:
- Approved training time that can't be billed against a customer (discussed below)
- Internal-only meetings, things like the weekly engineering call
- Sales and marketing activities that can't be assigned to a specific customer
Do I need to enter time for vacation, sick leave, etc?
It is not necessary to enter time off in your timesheet, though feel free to use the "Time off" entry if you find it helpful. What is important is that, if you have scheduled time off, you request it in advance so that we know someone will be unavailable.
When is training customer-billable?
The general rule is: if there's a skill set that our customer should reasonably expect we already have when starting, the training time should be billed internally. Otherwise, it should be customer billable.
For example, if a customer has hired us to do Rust or DevOps work, and an engineer needs to learn the basics of Rust and DevOps, that time would be internally billable.
By contrast, learning specific details of the customer project, learning specific AWS services that the customer needs us to use, or exploring more advanced topics of Rust, would all be customer billable.
If you're not sure: ask your manager for guidance.
Onboarding and offboarding users checklist
This document is intended primarily for FP Complete administrators, not most team members. It covers engineer-specific onboarding and offboarding activities.
Onboarding
- Add all engineers to our github.com org
- Calendar
- Invite to FP Complete Weekly Engineering Meeting event (if an engineer).
Offboarding
This section includes a near-exhaustive list of places where credentials should be checked when offboarding a team member. Not all of these will be set up.
- Github.com
- Github internal org (fpco-internal)
- Gitlab.com
- Cloudflare
- AWS
- Remove AWS SSO user
- Remove SSH keypairs
- Google
- Google Analytics
- Google Search Console
- Google Tag Manager
- Google Marketing Platform
- Google Drive
- Look through project shared folders and remove user
- Remove from team shared drive
- ChatGPT
Introduction
This document lists the current preferred tech stack recommended by FP Complete for clients and customers.
The choices listed here are based on the engineering experience at FP Complete working with various clients.
- Cloud Environment
- Deployment Environment (Server-side)
- Deployment Environment (Client-side)
- DNS Provider
- IaC (Infrastructure as Code) tool
- Secret Management
- Programming Languages
- Nix/Docker
- Continuous Deployment
- Kubernetes YAML Management Tool
Cloud Environment
Usually, clients come with their own preferred cloud environment, so this is something we don't often have a say in. We at FP Complete are primarily an AWS-based shop, but we also have significant experience with Azure.
Alternatives:
- AWS
- Azure
- Google Cloud
Recommendation:
Although we have experience with multiple providers, we slightly prefer AWS over Azure due to its market share and our extensive in-house experience.
Deployment Environment (Server-side)
This section covers our preferred environment for deploying server-side applications.
Alternatives:
- Kubernetes
- ECS / Fargate (AWS) / Azure Containers (Azure)
- Virtual machines (VMs)
- Serverless (AWS Lambda, Azure Functions, etc.)
Recommendation:
Amazon ECS
Reasons:
Historically, we preferred Kubernetes, but we have recently started using the vendor's native container solution more often because of:
- Lower costs
- Easier ongoing maintenance
For an architecture based on ECS, read this doc.
Deployment Environment (Client-side)
This refers to the deployment of your frontend application.
Alternatives:
- Cloudflare Pages
- AWS
- Vercel
Recommendation:
Cloudflare Pages
Reasons:
- Low cost.
- Provides good support for Terraform.
DNS Provider
Alternatives:
- Cloudflare
- AWS Route 53
- GCP Cloud DNS
Recommendation:
Cloudflare
Reasons:
- Easier integration with other Cloudflare products like Zero Trust when needed.
- Provides easy and affordable DDoS protection. This is particularly relevant for our blockchain projects.
IaC (Infrastructure as Code) tool
Alternatives:
- Terraform
- Pulumi
- AWS CloudFormation
- Azure Resource Manager
Recommendation:
Terraform
Reasons:
- Its simpler language makes it easier to onboard and train new team members quickly.
- Terraform has providers for all major cloud platforms, including less common ones like Oracle.
- The community is active and provides a good number of high-quality modules.
We have our eye on Pulumi and will consider it for future projects. We generally avoid vendor-specific solutions like CloudFormation or ARM to prevent vendor lock-in.
Secret Management
Depending on the use case, we recommend various solutions:
- CI pipeline integration: amber
- Kubernetes manifests: sealed-secrets or external secrets operator if you want to integrate with a cloud provider's secret system.
- Storing in the cloud: AWS Secrets Manager / Azure Key Vault
- Sharing secrets with customers: Bitwarden
Programming Languages
We choose our programming languages based on the project's domain.
Server-side
For server-side applications, CLIs, etc., we are currently leaning towards Rust.
Client-side
For client-side web UI programming, we are leaning towards TypeScript. We also have leptos on our radar and plan to use it for non-public-facing applications where applicable.
Nix/Docker
This section documents our choice of tool for containerizing application code with its dependencies. The primary tools for this are Docker, Podman, and Nix.
Note that for Nix, this section only considers the pkgs.dockerTools functionality in nixpkgs, which is used for creating Docker images.
Alternatives:
- Docker
- Podman
- Nix
Recommendation:
Docker
Reasons:
- High market share.
- Easier to train and onboard new team members.
- Nix has a steep learning curve.
We might also want to use FPCo's pid1 Docker image for proper reaping of orphan processes. A better alternative is running health-check as the PID 1 process.
Continuous Deployment
This section applies when using Kubernetes.
Alternatives:
- ArgoCD
- Flux
- Spinnaker
Recommendation:
ArgoCD
Reasons:
- It is relatively simple to set up and has a reasonable web UI.
- It supports various tools (Helm, Kustomize, etc.) and allows for the integration of custom tools.
- The community is active and has been responsive to bug reports.
Kubernetes YAML Management Tool
This section applies when using Kubernetes.
Alternatives:
- Kustomize
- Helm
- Jsonnet
Recommendation:
Kustomize
Reasons:
- We have used Helm in the past, but found that using template directives to inject values can be fragile.
- Kustomize is an official tool sponsored by the Kubernetes CLI SIG.
- It makes patching Kubernetes resources for different environments straightforward.
Rust Best Practices
Evolving FP Complete recommendations document. Can include anything from recommended libraries and tools to how to use language features. If you think it's a good idea, add it! We can worry about organizing it later.
- Don't panic! Use errors
- Be liberal in what you accept
- Avoid references in structs
- Use an Rc or Arc when needed
- CLI: use clap
- Lock your repos
- Test your code
- Lint your code
- IDE
- Logging
- Public vs private fields
Don't panic! Use errors
- Custom app error type
- Use `?` a lot
- Learn to use `ok_or` and `map_err`
- Crates helping with error handling
- Provide a user-friendly `Display` for your errors
Context for errors
- Don't simply package up an `std::io::Error` and call it a day
- Provide some additional information, like "Was trying to open file XXX and received this error"
- Much more user friendly, much easier to debug
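As a minimal sketch of what this looks like in practice (the `AppError` type and `read_config` function here are invented for illustration, not taken from a real project):

```rust
use std::fmt;
use std::path::PathBuf;

// Hypothetical application-wide error type.
#[derive(Debug)]
enum AppError {
    // Wraps the underlying I/O error together with the file we were touching.
    ReadConfig { path: PathBuf, source: std::io::Error },
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::ReadConfig { path, source } => write!(
                f,
                "was trying to open file {} and received this error: {}",
                path.display(),
                source
            ),
        }
    }
}

impl std::error::Error for AppError {}

// map_err attaches the context; ? can then propagate the enriched error upward.
fn read_config(path: PathBuf) -> Result<String, AppError> {
    std::fs::read_to_string(&path).map_err(|source| AppError::ReadConfig { path, source })
}

fn main() {
    if let Err(err) = read_config(PathBuf::from("app.toml")) {
        eprintln!("{err}");
    }
}
```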
Be liberal in what you accept
- Prefer taking `&str` over `String` in function arguments
- Similarly, take `&[T]` instead of `Vec<T>`. If you just need to iterate over something, consider accepting a more general `IntoIterator<Item = T>`.
- Avoids forcing someone to create an owned copy in many cases
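For example, a quick sketch of such signatures (the function names are purely illustrative):

```rust
// Takes &str, so callers with a String, a literal, or a slice all work without cloning.
fn shout(name: &str) -> String {
    format!("HELLO, {}!", name.to_uppercase())
}

// Takes anything iterable over u64: Vec, arrays, iterator adapters, and so on.
fn total(values: impl IntoIterator<Item = u64>) -> u64 {
    values.into_iter().sum()
}

fn main() {
    let owned = String::from("world");
    println!("{}", shout(&owned)); // &String coerces to &str, no copy needed
    println!("{}", total(vec![1, 2, 3]));
    println!("{}", total([4, 5, 6]));
}
```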
Avoid references in structs
- References in `struct`s make you include lifetime parameters
- Generally: avoid lifetime parameters if you can; it simplifies code a lot
- Usually, you'll want a `String`, not a `&'a str`, in your `struct`s
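A minimal illustration of the difference (both `User` types are made up for this example):

```rust
// The lifetime parameter infects every function and struct that touches this type:
struct UserRef<'a> {
    name: &'a str,
}

// Owning the data keeps the type, and everything using it, simple:
struct User {
    name: String,
}

fn main() {
    let borrowed = UserRef { name: "Alice" };
    let owned = User { name: "Alice".to_string() };
    println!("{} / {}", borrowed.name, owned.name);
}
```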
Use an Rc or Arc when needed
- Many borrow checker issues can be short-circuited by throwing `Rc` or `Arc` into things
- Don't worry too much about optimizing these cases
- Also, if it's not too expensive, consider a `.clone()` as well
- A good example of not doing this and wasting a lot of time: https://www.fpcomplete.com/blog/avoiding-duplicating-strings-rust/
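As a small sketch of the pattern (the shared `config` value is illustrative):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc gives shared ownership across threads, sidestepping borrow checker fights.
    let config = Arc::new(String::from("shared configuration"));

    let handles: Vec<_> = (0..3)
        .map(|i| {
            let config = Arc::clone(&config); // cheap reference-count bump, not a deep copy
            thread::spawn(move || println!("worker {i} sees: {config}"))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}
```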
CLI: use clap
- `clap` is a really nice library
- Forces you to deal with "impossible" error cases
- Less error handling in your own code
- amber has a nice example of it
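A minimal sketch of a clap-based CLI, assuming clap 4 with its derive feature (the `Cli` struct and its arguments are invented for illustration):

```rust
use clap::Parser;

/// A hypothetical greeting CLI.
#[derive(Parser)]
struct Cli {
    /// Name of the person to greet
    name: String,
    /// Number of times to greet
    #[arg(long, default_value_t = 1)]
    count: u32,
}

fn main() {
    // parse() prints a usage message and exits on bad input, so the
    // "missing or malformed argument" cases never reach our own code.
    let cli = Cli::parse();
    for _ in 0..cli.count {
        println!("Hello, {}!", cli.name);
    }
}
```

Running it with no arguments prints a usage message and exits, which is exactly the error handling we no longer have to write ourselves.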
Lock your repos
- Include `Cargo.lock` and a `rust-toolchain`
- Makes it more reproducible
- If you use Git deps, do it by commit SHA
Test your code
- Obviously
- Consider using quickcheck, it's nice!
- For most unit tests, include them within a `#[cfg(test)] mod tests { ... }` section.
- For integration tests and larger unit tests, place them in a separate `.rs` file in the `tests` directory.
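A minimal sketch of the unit test layout (the `double` function is invented for illustration):

```rust
pub fn double(x: i32) -> i32 {
    x * 2
}

// Compiled only under `cargo test`, so it adds nothing to release builds.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn doubles_positive_numbers() {
        assert_eq!(double(21), 42);
    }

    #[test]
    fn doubles_negative_numbers() {
        assert_eq!(double(-3), -6);
    }
}
```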
Lint your code
- `cargo clippy` and `cargo fmt` are good
- Include them in CI
- Github Actions has nice stuff for that already
IDE
- Use Rust Analyzer as the language server.
- Visual Studio Code: the extensions `rust-lang.rust` and `matklad.rust-analyzer` are popular.
Logging
- Use the `tracing` crate for logging in all libraries and applications.
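As a small sketch, assuming both the `tracing` and `tracing-subscriber` crates are dependencies (the `process_order` function is illustrative):

```rust
use tracing::{info, instrument};

// #[instrument] creates a span recording the function name and its arguments.
#[instrument]
fn process_order(order_id: u64) {
    info!("processing order");
}

fn main() {
    // A simple subscriber that prints structured events to stdout.
    tracing_subscriber::fmt::init();
    process_order(42);
}
```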
Public vs private fields
From a discussion in an engineering meeting on June 11, 2025.
When declaring fields on a struct in Rust, you need to determine the visibility: public, private, or something in between (like `pub(crate)` or `pub(in crate::mod_tree)`). This applies to functions, methods, enums, and more as well, but the question comes up most often for structs: should the fields be public or private?
There will always be some room for variation and disagreement about this, and it's almost always a case-by-case basis. However, the following general guidelines work as good defaults.
- There's a big difference between public, published libraries and libraries that will be used internally by our team and/or customers. The guidelines here apply to the internal variety. We're punting on published library guidelines for now.
- If there are invariants to be maintained within a `struct`, such as ensuring that a field can never be 0, keep the field private. This is good encapsulation behavior.
- If the `struct` is mostly simple data (i.e., the Rust equivalent of a Plain Old Java Object), defaulting to public fields makes sense.
- More complex fields, such as `Mutex`es, should probably default to being more private.
- In general, using private fields can be a good way to protect consumers of a library from API breakage when internals change. However, when we're working in an internal capacity, it doesn't much matter if we break things in this way. Usually, the library author and application author are the same person, and regardless we're all on the same team anyway.
DevOps Architecture on AWS
This page documents the standard DevOps Architecture that we typically follow at FP Complete for projects hosted on Amazon Web Services (AWS). While the specifics are tailored for AWS, the core principles can be applied to other cloud providers.
The goal of this document is to provide a recommended foundation for deploying containerized applications.
Health Checks and Monitoring
Before deploying an application, ensure that the entrypoint of your Docker container is the health-check executable. This tool acts as a wrapper around your application, monitoring its health and reporting any crashes.
It is crucial to configure health-check with Slack notifications. This ensures that any application crashes are immediately reported to a designated Slack channel, allowing for rapid response. Typically, we use different Slack channels for different environments (e.g., testnet, mainnet).
If your team uses a different communication platform, you will need to add support for it in the health-check executable.
Amazon Application Load Balancer (ALB)
We use Amazon's Application Load Balancer (ALB) to receive traffic from the internet and route it to the appropriate backend applications running on ECS.
To optimize costs and simplify management, it is best practice to use a single ALB for your entire project. The ALB can use host-based or path-based routing rules to direct traffic to multiple distinct applications or services.
Amazon Elastic Container Service (ECS)
Our standard compute platform is Amazon Elastic Container Service (ECS) with AWS Fargate. Using Fargate provides a serverless experience for running containers, removing the need to provision and manage the underlying EC2 instances.
- Logging: We rely on Amazon CloudWatch Logs, which natively integrates with ECS for log collection and monitoring.
- Secrets Management: Application secrets should be passed securely as environment variables to the containers. These secrets should be managed and propagated from Terraform using the `amber` tool.
Amber Secret Management
With this setup, you will need the amber tool to run terraform plan. Typically, you would do it like this:
amber exec -- terraform plan
Note that for this to work, you must export the AMBER_SECRET environment variable in your shell. The AMBER_SECRET variable should be shared using one of our recommended tools. We typically use Bitwarden, but some customers may prefer a different tool, such as 1Password.
Amazon RDS
For relational databases, we use Amazon Relational Database Service (RDS).
- Instance Sizing: The database instance size should be chosen based on a balance of cost and performance requirements. You can consult a resource like https://instances.vantage.sh/rds to compare options.
- Bastion Access: For administrative access to the RDS cluster, use an EC2 Instance Connect Endpoint. This is more secure than a traditional bastion host as it does not require managing SSH keys or leaving ports open in a security group.
Choosing Between Standard PostgreSQL and Aurora PostgreSQL
Note that Aurora PostgreSQL is AWS's closed-source fork of PostgreSQL, although it maintains wire compatibility with the open-source PostgreSQL database.
Our recommendation is as follows:
- Start with Standard PostgreSQL. It is generally more cost-effective and offers smaller instance sizes, making it ideal for initial deployments, development, and testing environments. For non-production environments, choose a burstable CPU type (e.g., T-series). For production environments, you would typically want a non-burstable CPU type for consistent performance under significant traffic.
- Consider Aurora PostgreSQL when your application has a write-heavy workload and you begin to hit the performance limitations of Standard PostgreSQL.
Cloudflare (Optional, but Recommended)
Using Cloudflare as a layer in front of the ALB is highly recommended, especially for applications expecting significant traffic or requiring enhanced security. It provides critical features like DDoS protection, a Web Application Firewall (WAF), CDN caching, and rate limiting.
When using Cloudflare, ensure the following configuration:
- Infrastructure as Code: Manage Cloudflare resources using the official Terraform Cloudflare provider.
- End-to-End Encryption: Import Cloudflare's origin certificate into AWS Certificate Manager (ACM) and attach it to the ALB listener. The SSL/TLS encryption mode in Cloudflare should be set to Full (Strict) to ensure a secure, end-to-end encrypted connection.
Additional useful Cloudflare features include:
- Cloudflare Zero Trust: For securing access to internal applications and environments.
- Cloudflare Health Checks: For monitoring application availability from the edge. This can be configured to send alerts to Slack (or any webhook-based) channels.
- PagerDuty Integration: For advanced incident response (Note: This is available only on Business plans).
CloudWatch Alarms
Make sure to set up CloudWatch alarms for key metrics, such as:
- Log Group Size: To prevent excessive logging costs, create an alarm for when a log group's size exceeds a daily threshold.
- ALB 5xx Errors: Monitor for an increase in server-side errors (HTTP 5xx status codes).
- High CPU Utilization: For RDS instances and ECS tasks.
- High Memory Utilization: For ECS tasks.
- RDS Storage: For non-Aurora instances, monitor for low free storage space to prevent outages.
Example Stack
The architecture described above is a proven stack we use for most of our clients. An example implementation of this architecture can be found in the devops directory of the kolme-rare-evo-demo repository.
This document is a collection of "start here" links for common topics we encounter on our team. One aim of this document is to equip engineers to get started easily with a specific domain. The domains cover everything that FP Complete works on, e.g., Rust, Haskell, Kubernetes.
If you don't have access to any of the internal materials linked below, contact Michael or Sibi to get the relevant access.
Rust
- Michael's video course on Introduction to Rust
- Michael's video course on Intermediate Rust
- Rust book by Steve Klabnik and Carol Nichols.
- Begin Rust book by Michael and Miriam in PDF.
Terraform
- Practical guide to using Terraform with different cloud vendors
- Official doc: Motivation behind Terraform
Kubernetes
- Official Kubernetes tutorial
- Video course on Scaling Microservices with Kubernetes
Logging best practices
Generally, we follow the 12 factor app's recommendations on logging. A basic overview:
- Logs should be sent to either stdout or stderr.
- DevOps is responsible for capturing that output and ingesting into a logging system.
- Use best-in-class libraries per language. In Rust, for example, we recommend using the `tracing` library.
- What to log:
  - Anything that might potentially be useful to know
  - Don't be afraid to add `debug`-level output
  - If you're not sure if you should log it, log it
  - Limiting factor: logging should not be the bottleneck of the application
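As a concrete Rust illustration of these points, a minimal sketch assuming the `tracing` and `tracing-subscriber` crates (the latter with its env-filter feature enabled):

```rust
use tracing::{debug, info};
use tracing_subscriber::EnvFilter;

fn main() {
    // Per the 12 factor approach: write logs to stderr and let the
    // deployment environment capture and ship them. Verbosity is
    // controlled at deploy time via RUST_LOG (e.g. RUST_LOG=debug).
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .with_writer(std::io::stderr)
        .init();

    info!("service starting");
    debug!(order_id = 42, "cheap to emit, easy to filter out");
}
```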
Haskell
- Michael's video course on Introduction to Haskell
- Michael's video course on Intermediate Haskell
Skills
Practical guidance on core team skills: self management, clear communication, expectations for senior/lead engineers, and what managers should cover in 1:1s. Skim the sections you need and add to it when you find gaps.
- Self Management
- Clear Communication
- Senior Engineers
- Manager responsibilities
- Writing
- Efficient Meetings
- Big head/little head
- Customer Communications and Relationship Management
- Push vs pull model
Self Management
This is a quick guide to staying organized in a calm, easy-to-manage way. It borrows from Getting Things Done without requiring any specific tool.
Principles
- Anything which must be done must have at least one next task. That task must have an owner. That owner is responsible for follow-through.
- If you do not trust the owner to follow through, you should keep track of the task as well.
- All tasks should be written down somewhere. It is OK if they are referenced in multiple locations, but there should be one master location for detailed notes.
- Each individual should have one central location for all tasks they must follow through on. This may include references/links to other locations.
- Each individual should train themselves to regularly create tasks in their own tracker, refer to their tracker, and check off tasks when completed.
- A tracker is separate from a calendar, which should always be up to date and indicate times for meetings and other highly time-sensitive activities.
- Ideally, messaging systems (Slack notifications, emails, etc) should be kept clear. If a message or email necessitates handling, create a task for your follow up. Use “snooze” or similar features as necessary.
- Ultimately, the goal is that among your inbox, calendar, and personal tracker, there is a clear view of what you should be focused on at any given time.
- Make tasks achievable, well defined, complete them, and move on.
The tracker
Use any todo list that you will actually maintain. Todoist is one good option with sync, priorities, and comments.
There will almost certainly not be a central tracker for all work. We have personal lives. We have FP Complete issue trackers on Gitlab. There are separate customer issue trackers.
Do not let the perfect be the enemy of the good. This problem typically results in engineers seeking a way to “fix” the problem and unify everything, or create some automated system to show a unified dashboard. Don’t do that! Copy a link, stick it in your tracker, move on. The cost of doing anything else isn’t worth it.
Complete tasks
Make sure all tasks are actionable. “Improve code” isn’t a task, it’s a goal. A task is actionable if there’s a way to say “this has been completed.” “Provide a write-up on the improved code goals” is an action item.
If a task can’t be completed: close it. Open tasks are a distraction. Each one presents cognitive overhead.
If a task seems valuable, but there’s nothing actionable to be done, find some actionable subcomponent and turn that into an issue.
If you've completed 70% of a task, but there's more work to be done, close the open task and create a new one focused on the pieces that remain.
You get a dopamine hit from completing a task. Use that!
Communicate clearly
If a meeting happens and nothing is written down, the meeting didn’t happen. Write down action items, ideally in a shared tracker. Ensure everyone is clear on who the owner is. If you are waiting for someone to do something, ensure you’ve clearly communicated that fact. All too often two people are silently waiting on others.
Some people are not good about following up on items they own. This especially applies to our customers, but sometimes to our team members too. Use your own tracker to schedule follow-ups on such topics. If you repeatedly have issues with someone dropping issues, raise it as a concern with them, and/or discuss with their manager.
Clear Communication
The ability to communicate clearly with others is a multiplier skill: it makes every other skill you possess more valuable. Together with self management, it forms the basis of leadership potential and of being a reliable team member.
A basic idea can be seen at The No Hello Club, but that is just a bare minimum.
This document cannot teach all nuances of clear communication. Instead, it is intended to give a strong overview of goals you should strive for in improving your communication. It provides some techniques which, if followed, will help you communicate better. In addition to these points, you should ask for feedback and recommendations from team members, especially your manager, leads, and people you believe communicate effectively.
Identify goals
In any medium of communication, your first question should be: what are you attempting to achieve? Too often, engineers in particular have a tendency to communicate what they believe to be useful information but which, in reality, means nothing to the receiver.
Here are some examples of good goals for communication:
- I am having an issue with this technical challenge, and I would like assistance in resolving it.
- I have a concern about the customer’s expectations, and I need to raise the issue so that someone on the customer relationship team will address it.
- We are doing well on a project and expect to hit deadlines.
Here are some contrary examples of bad goals for communication:
- I want to tell you about what I’ve been working on this week.
- There may be a place for this in the sense of “I just want to share this with someone.” But this should be explicit. Providing a stream of details will likely confuse the listener as to what they need to try and glean from the discussion.
- I want to tell you that I’m concerned about the project.
- Unlike the good goals above, this one has no clear objective. Sharing the information is not the goal in and of itself. There should be some objective such as “please advise me on what to do next” or “can you double-check if my read on the situation is accurate?”
Concision
There are two competing goals here:
- Provide sufficient information to express the point
- Do not provide too much information to waste time or drown out the point
Concision is highly context specific. Consider the case of an async exception bug affecting a customer's Haskell codebase. Two Haskell developers who are both familiar with the codebase need to exchange far less information than a developer explaining the same situation to a non-technical customer relationship manager would. With concision, the important points are:
- Attempt to identify how much context is necessary
- Confirm with the listener that they understand what you are saying
- Provide opportunities for the listener to tell you that they already understand these points
Communication media
We have many different forms of communication available to us:
- Slack
- Documents
- Voice chat
- Video chat
- Blog posts
Choosing the correct form of communication is important. In general, especially on a distributed team like ours, asynchronous and text based communication should be preferred. If something can be expressed in an email, Slack message, or document, it typically should be.
Video and voice chats place higher demands on synchronicity and time from participants. They also provide faster feedback and more communication bandwidth. They are certainly warranted in many cases. And in cases where text communication is leading to confusion or frustration, a voice or video call is practically required. That said:
- Some people rely too heavily on video or voice chat to avoid the hard work of communicating clearly. It can be easier, and lazier, to take up 30 minutes with an open-ended video call for something that could have been achieved with 10 minutes of thinking about what to say and 5 minutes of typing.
- A very negative side effect of video chat is often the lost meeting. When no notes or action items are taken from a video call, everyone walks away with a different understanding of the meeting. Or sometimes no memory of the meeting at all.
A basic guide to communication media is:
- Decisions and action items should be recorded in a document, issue in an issue tracker, or similar
- Exploratory discussions should start as a Slack thread, and result in something more concrete at the end
- If the topic is unwieldy on Slack, resort to a video chat
Private communications
The cornerstone of our ability to operate well as a globally distributed team is shared, asynchronous communication. Put simply: unless you have a strong reason to do otherwise, please communicate in topic-appropriate channels instead of private messages.
Many team members, especially newer team members, have a tendency to rely on one-on-one messages in Slack, or to wait for a call to discuss topics. People have different reasons for doing this, such as not wanting to ask "stupid" questions in a public channel. However, private communications greatly and negatively impact the team:
- It creates silos of knowledge that cannot be shared across the team
- It places unnecessary burden on the receiver of the messages to be the only person who can answer questions
- Due to the variety of timezones we're in, it causes delays in delivery
- Public discussions--especially technical discussions--are the cornerstone of building a company culture. It's a distributed team's equivalent of the water cooler.
Private communications do of course make sense in many cases: friendly chats, discussing personal issues, raising a concern that shouldn't be on the public record, and more. However, these cases tend to be far less prevalent than many people believe. Default to public unless there is a clear reason not to.
If you're unsure if something should be private or public, ask (and probably ask privately). If you receive a message that you believe should better be handled in a public channel, you can ask that the discussion be moved to a public channel (and a link to this section may be a helpful way to reinforce that).
Senior Engineers
This section describes what it means to be a senior engineer. Titles are lightweight; the goals here are to:
- Express to other team members what they can expect from a coworker
- Give concrete growth goals to team members looking to advance
Skill stacks
Your work relies on more than one single skill. For example, when working on a server-generated HTML web application, you will need technical skills including the server side language, HTML, CSS, Javascript, potentially SQL, some DevOps knowledge around deployment, etc. A common mistake in assessing someone’s skills is to assume all skills are equivalent.
As an example, it's entirely feasible that an engineer may be very strong at Haskell, HTML, and SQL, but have weak skills in DevOps, CSS, and Javascript. As a result, it's entirely possible for someone to be more senior in their abilities for some tasks than others.
In addition, all of the skills above are technical skills. There are many other non-technical skills related to your work: requirements gathering, clear communication, self organization, etc. A large part of being a senior engineer, and especially a lead engineer, is improving these soft skills in addition to technical skills.
Skill levels
Roughly speaking, for each individual skill, people can be classified into level 1 to 4. The basic breakdown of these levels is:
- Level 1: The person needs detailed guidance on how to complete the task
- Level 2: The person knows the basic outline of how to complete the task, but needs detailed oversight
- Level 3: The person has strong capabilities in the task, but may require some review of details, and lacks confidence for complete ownership
- Level 4: The person is an expert at this task, and requires little oversight or guidance
Leads and managers at FP Complete are trained to assess skill levels along these lines, and adapt leadership style for each person accordingly.
The final important piece worth reiterating: these skill levels may be different for each skill. As in the example above, the same person may be a level 4 Haskell developer and a level 2 Javascript developer. Care must be taken by both the person being managed and the manager to note this difference, and ensure there is appropriate guidance.
Skills expected of senior engineers
Senior engineers are expected to be at skill level 3 or 4 on the majority of skills related to their job. For example, a senior DevOps engineer should be at skill level 3 or 4 for Terraform, AWS, Kubernetes, and Docker. It is not necessary to be at skill level 3 for development tasks. The same would apply the other way: a senior Haskell engineer may be skill level 4 at Haskell, but skill level 1 or 2 with Terraform.
Senior engineers are expected to have the ability to self assess and identify weaknesses in skills. No person on the FP Complete team is expected to be an expert on all topics. However, each senior person must be capable (at skill level 4) of realizing when their skills on a topic are insufficient, and be able to ask for assistance from other team members.
Senior engineers are generally expected to have architect skills: design, plus documenting and communicating a solution. This involves requirements gathering as well.
In addition, the following skills are less technical, but no less important, for a senior engineer:
- Clear communication. Both written and verbal communication must be clear, concise, and comprehensive. See the clear communication section for more.
- Self management. Senior engineers must be capable of tracking their own work items, tasks assigned to others on their team, items they are blocked on customers for, meeting times, etc. See the self management section for more.
- Regular team tasks, such as status reporting, hours entry, security checks, etc. Senior engineers are expected to be fully autonomous on such activities, and require little to no assistance in the form of reminders to regularly perform such tasks. These tasks may be boring, but senior engineers must understand the need for such activities on a team, and be capable of taking responsibility for such activities.
Lead level skills
Engineering leads have additional responsibilities above senior engineers. While senior engineers are expected to own and complete tasks, lead engineers also:
- Identify the tasks that must be completed
- Break up tasks into meaningful milestones
- Assign tasks to team members based on skills
- Aggregate status across a team and update a customer
- Understand the deeper requirements coming from a customer and propose solutions
You’ll notice that little of the additional skill set here is technical. What distinguishes a lead engineer is stronger communication and coordination skills, rather than raw technical talent.
Manager responsibilities
NOTE: This document is primarily targeted at managers, but all engineers will benefit from reading it.
FP Complete generally follows a "matrix management" approach to engineer management. The general idea is that we split up responsibilities between:
- Project lead: responsible for prioritizing work items, answering questions about requirements, providing technical mentorship, etc.
- Manager: responsible for day to day guidance, development of soft skills, logistical questions, and growth trajectory.
Lead responsibilities are in many ways more clear-cut; this section is just about the manager aspect.
Generally speaking, managers should have a regular (once every 1-2 weeks) call with each person reporting to them. This is a good time to touch base on high-level concerns and questions (which we'll discuss below). But you should strongly encourage anyone reporting to you not to wait for a meeting to bring up questions. FP Complete is an async-heavy organization, and it's important to foster clear text-based, async communication skills. (See clear communication for more details.)
As a manager, your primary goals should be:
- Engineers must know what project they are supposed to be working on, and how many hours to be dedicating to that.
- As a manager, you should know, overall, what tasks they are being assigned on that project. You should be in regular communication with project leads to get an overview of this. You don't need to be familiar with all the details of every task, but should at least know the general requirements and the skills necessary to achieve that.
- Understand which technical skills the engineer is interested in acquiring. We can't always guarantee that projects will be available for training up on specific skills, but we try to tailor the workload to allow for skill acquisition.
- The general rule is: 80% of someone's work should be in technical areas they're already proficient with, and 20% stretch skills. This won't always be possible, but it's a good goal to keep in mind.
- Identify soft skills to improve (communication, organization, etc.)
- Speak with project leads and other team members to understand the engineer's strengths and weaknesses.
- Provide feedback to the engineer about performance (more on that below).
Status updates
One of your most useful tools will be status updates. An easy failure mode as a manager is to not notice when an engineer is failing to make progress on a task. You should regularly ask for, either in writing or in a weekly call, a status update including the following:
- What have you accomplished since the last status update? This shouldn't be a laundry list of all work items, and shouldn't include things like "attended a meeting." These are demonstrable deliverables. "Worked on X, not yet completed" works here too. Generally, this should be a list of somewhere between 3 and 8 items.
- What you're planning on working on next. In an ideal world, everything from this list will end up on next week's "accomplished" list.
- Blockers: anything which blocks you from being able to complete your work. This could be "couldn't figure out how to do X," "didn't have permissions to access system Y," "blocked on feedback from person Z," etc.
Another important failure mode: blockers should generally not wait until a weekly call! There's a fine line to be walked between asking for help as soon as you hit any difficulties and stubbornly trying to hammer a square peg into a round hole. Be prepared to provide some feedback of "please spend more time trying to figure this out on your own." But you should help engineers reporting to you avoid a situation of identifying a blocker and then waiting multiple days before pointing it out.
Time boxing
Time boxing means to set a maximum amount of time to work on a task before touching base again. Time boxing has two valuable aspects to it:
- It prevents an engineer from spending unlimited time on a task.
- It's a communication tool between manager and engineer about how much time you think the task will take.
For example, without time boxing, imagine Alice (the manager) asks Bob (the engineer) to update the button on the homepage from red to green. Alice thinks this is a five minute task, but doesn't say anything about it. Bob decides the best implementation is to write a new AWS Lambda service which will generate a new version of the button's image using whatever color was specified in a query string parameter. Bob spends the next three weeks working on this, perfecting color gradient rules, optimizing the rendering, implementing a static file caching solution, etc. With great joy, Bob shows Alice the completed work. And Alice is horrified at the wasted time.
All that could have been avoided with a quick time boxing discussion, or at the very least an effort estimate. For example:
Alice: How much time do you think this task will take?
Bob: About three weeks.
Alice: Three weeks to change a button color? Why?
Bob: Well, we keep needing to change the images, so I figured I'd automate the process with an AWS Lambda service. I just need to spend some time learning Python first.
Alice: Let's just manually change the image this time.
When you timebox, follow these steps:
- Discuss the overall task
- Estimate how long both parties think the task will take, including "best case" and "worst case" analyses
- Set a timebox. This number doesn't necessarily need to be "time it will take to complete all the work." Instead, the timebox value could be "time before we should review work." It should never be longer than the worst case estimate, and is usually around or less than the best case estimate.
- After you've worked on the task for the given timebox time, touch base with your manager on your progress and discuss next steps.
Organization
An important aspect of FP Complete work is self management. Have explicit discussions with your engineers about how to stay organized, especially with engineers who are new to remote work. Cover topics like:
- Where do you track your work items?
- Make sure to regularly check your company email.
- Set up notifications for messaging platforms (especially Slack).
- How do you get notified of meetings? Do you have calendar notifications enabled?
Performance review
We tend to be relatively lax about performance reviews at FP Complete. There is value in tracking performance and setting goals, so it's worth having at least informal discussions around performance. Some guidelines:
- When writing up performance reviews, be direct and personal.
- Good: "Your Rust coding skills are improving, but you're still having a hard time with understanding lifetimes."
- Bad: "The engineer demonstrates an overall proficiency in programming skills, with Rust being the primary target language. However, the engineer is still attempting to fully internalize usage of lifetimes."
- You should include both the good and the bad in performance reviews. Generally, the "shit sandwich" approach works best: provide positive feedback, negative feedback, and a positive tie-off. Example: "You've provided lots of valuable feedback on our team calls. Unfortunately, your attendance has been poor, missing at least half of the project meetings. Please work to ensure you show up on time to meetings going forward; your input has greatly improved the success of the project."
- Establish goals for the next period, usually 6-12 months. Goals should combine what the engineer wants to achieve with the needs of FP Complete. "Improve your skills with Kubernetes" would be a good starting point with a goal, though when possible defining something more concrete is best, e.g. "Demonstrate expertise with Kubernetes by setting up a new high availability deployment of the frobnicator application."
- Don't forget to include soft skills in performance. Engineers love working on improving technical skills, but often non-technical skills can make a much larger impact than you'd expect. Skill with writing, talking with customers, or making architecture diagrams would all fit in this.
Writing
This is a short guide and collection of tips for how to write blog posts and other kinds of marketing material for FP Complete.
How to write
Always identify, before you begin writing a blog post:
- Who is the target audience? You'll write a post quite differently if the audience is an experienced Haskeller versus a non-programmer
- What action you would ideally like this person to take.
- What this person should understand from reading the blog post.
- What this person should believe from reading the blog post.
From a business perspective, the goal of writing blog posts is to:
- Maximize the stream of people coming to our website
- Optimize the end state of those people when they finish with our website
Therefore, from a business perspective, we should judge the merit of a blog post on how many people we realistically expect it to attract, and on the expected next actions for those people.
Blog posts should always include calls to action. The value of the blog post is judged on what we expect people to do, not the call to action itself. For example, if we have a call to action "give $1,000,000 to FP Complete for no reason at all," we have a low expectation that someone is going to do it.
Consult with the sales and marketing team to identify what business goals we should try to achieve with blog posts, based on current company objectives.
Efficient Meetings
With groups over 3, use live meetings for interactive discussion toward a clearly stated outcome, not for disseminating routine information. Keep attendee lists and meeting durations compact. Keep the meeting on topic and moving along. Send out information in advance rather than in the meeting. Be respectful of others’ time and logistics.
One of our strengths is our efficient, results-focused engineering culture. This includes efficient group communication and decision making. Much of our work is done asynchronously through documents, emails, instant messages, issue tickets, and shared repositories. Sometimes a live meeting with another person is the best way to make progress, whether in-person, by video, or by phone. We do these whenever helpful, and sometimes just to check in.
Larger meetings are different: their cost grows with duration and with the number of attendees, and they become inefficient if not well organized. As our team has grown, meetings have become more common, and they should be run well. For any real-time meeting involving more than 3 people, effective immediately, please follow these principles.
- Disseminate information at least 24 hours in advance, not in the meeting. This allows time to read and consider. Meetings are for interactive discussion, problem solving, and decisions, not for delivering fundamental information. (An exception may be made for very dense project management or scrum meetings, which are all about sharing fresh information in a highly structured and rapid manner.)
- Arrive prepared. Check your email for notes you are expected to read in advance. Read them and gather your thoughts. Make notes if that helps. Send minor comments to the author rather than using up meeting time.
- State the goal of the meeting. Declare in advance the decisions or creative outcomes that are needed by the end of the meeting, and that therefore drive the agenda of topics. Accomplishing the goal defines success for the meeting. A meeting whose goal is accomplished (or cannot be identified) should be ended (or canceled), and the time used for other work tasks.
- Choose a chairperson to run the meeting, keeping it on-topic and on-schedule. This person (the "chair" of the meeting) respectfully but clearly moves the discussion onward if an item is running long, is off-topic, is stuck on a less-important detail, or is ready to be delegated to a smaller group for post-meeting action.
- State the duration of the meeting, usually an hour or less. Longer meetings may be broken into separate meetings, each with shorter attendee lists.
- Keep the attendee list compact, freeing up as many people as possible to do other work. Minor contributors (whose insights are not central to the meeting's purpose) can send materials in advance, or answer short questions afterward, rather than having to attend the meeting. Also, attendees not needed for the remainder are free to leave.
- Allow attendance by telecom, to avoid imposing time-consuming travel on your remote colleagues. For better communication, use video where available, not just audio. Prefer in-person meetings for building relationships and for situations involving strong emotions.
- Cancel unnecessary meetings, creating instead a document or online resource that delivers needed information and provides a place for colleagues to contribute feedback and ask questions. Often, most or all issues are resolved online and through 1-on-1 discussions.
- Respond to meeting requests accurately, and let the organizer know if a time conflict arises. Rescheduling may disrupt a colleague's schedule, so be considerate.
Meetings that might be longer or less organized include brainstorming sessions, social occasions, and off-topic but interesting lectures. These should be identified as such, and attendance is usually voluntary.
Our #1 goal is always to ensure project success. With well-organized meetings we can make well-informed decisions promptly, while making very efficient use of everyone’s time and skills. Use some of the time you save to have deeper conversations with one or two colleagues at a time, and of course to do your own personal work.
Big head/little head
These phrases (borrowed from Hebrew) refer to two different ways of relating to a task. "Big head" tries to see the large picture, adjusts the specific task to meet the larger goals, and asks lots of clarifying questions. "Little head" does exactly what it's told, without question.
There are times for both of these. But as smart engineers on the FP Complete team, we usually are expected to follow “big head” standards. This means that, when giving a task to another team member, you should give enough context for the team member to understand why he/she is performing that task. And when assigned a task, ask those clarifying questions, and propose alternatives as you see appropriate.
There is a cut-off point where there is too much analysis paralysis, and ultimately discussions need to halt. Also, there are times when there is significant time pressure, and we must simply jump into execution mode. Those cases should be exceptions, not the rule.
Customer Communications and Relationship Management
Most of the points and comments shared here are common practices, and all of us use them on a day-to-day basis. The purpose of this document is to refresh those practices and help us be even better at day-to-day customer and relationship management.
- Providing Cost and Time Estimates
  It is best practice not to communicate any time or cost estimates to the customer if they relate to a new task, milestone, or project. Please gather all requirements to the best of your ability and arrange a short meeting slot with the Customer Success team to help create a proposal and/or agreement for the customer. Providing an estimate in writing or verbally can create customer relationship issues in the future, in case estimates provided during a normal conversation turn out to be inaccurate.
- Disagreement with the customer
  If at any point a customer identifies a method, programming language, or criteria which you don't agree with, politely advise that you'll check with the team and get back to them. Allow the customer to talk and share their feedback. When it is imperative for you to share your disagreement with the customer, provide your feedback and request the customer's thoughts and feedback right away. It is a best practice that the customer feels ownership of what was originally proposed.
- Stay in contact
  It is best practice to stay in contact with the customer. If you are using an instant messaging tool like Slack, it is best to provide a summary of what was accomplished during the day. If you are working with a lead engineer, it is best to provide the summary of accomplished work to the team lead so they can provide an update to the customer.
- ETA Management
  It is always a best practice to provide an ETA to your clients for the task you are working on. Please be careful to add a buffer within your ETA as well, stating that "provided no technical hurdles are encountered, my ETA will be...." (which should still include a buffer).
- Customer Care
  Customer Care 101: Use empathy statements to show you understand the customer's feelings or frustrations.
- Overreaction / Do not React
  Never respond to angry comments. Allow the customer to voice their opinion and interject with helpful redirection when appropriate.
- Focus on the Goal
  When a customer is upset for any reason, redirect the conversation back to the important issues and focus their attention on constructive solutions.
- Verbal Communication
  Use words like "likely", "typically", "perhaps", "sometimes", "possibly", or "occasionally" with customers who might not respond well to categorical words like "always" or "never".
- Agreement with the Customers
  Find something to agree with the customer about. An agreement will result in collaboration and cooperation.
Push vs pull model
FP Complete runs on a pull model. This model is collaborative, driven by ownership and focused on delivery. This means:
- Everyone should understand the broader goals, not just their individual tasks.
- Team members should take initiative to get what they need, instead of waiting for things to be handed to them with all details - be it information, context, support or new work.
- They should be proactive: ask questions, raise concerns early, propose ideas and keep things moving.
- While they are not expected to know everything, they are expected to clearly identify and manage dependencies. The task they own is their responsibility from start to finish, until it is formally handed off to someone else.
- Each person is a partner in the process, not just a task-taker. They help shape the work, co-own outcomes and contribute to the team's momentum.
- When this mindset is missing, it does not just appear as a lack of ownership; it impacts the team's deliveries and productivity.
- Lastly, team members are expected to make a reasonable effort to find answers before asking questions. Reaching out without any prior research shows a lack of initiative and creates unnecessary dependencies.
Blockchain
We had two engineering meetings where we discussed blockchain in general.
There are many different blockchains out there, and while they have a lot of similarities, the programming model for each is fairly distinct. For now, FP Complete is specializing in Cosmos development. Please see the dedicated Cosmos training docs for more information.
Blockchain architecture
This diagram is meant to be shared externally in some form as a marketing message around our abilities with blockchain. I've tried to make it more generic by not referring to specific service providers (like AWS) or chain-specific tools (like Rust or CosmWasm).
graph LR
subgraph "Chain"
node(Node Provider) -->|Broadcast transaction| validators(Chain Validators)
validators -->|Produce blocks| node
sc(Smart Contracts) -->|Modify| state(Chain State)
sc -->|Query| state
node -->|Execute| sc
node -->|Query| sc
end
subgraph "Team"
dev(fa:fa-user Developers)
scDeployer(fa:fa-user Smart Contract Deployer)
frontendDeployer(fa:fa-user Frontend Deployer)
sre(fa:fa-user Site Reliability Engineer)
end
subgraph "Code Management"
git -->|Trigger Build| ci(Continuous Integration)
ci -->|Push image| registry(Docker Registry)
ci -->|Compile| scArtifacts(Smart Contract storage)
end
dev -->|Push Code| git(Git Repo)
sre -->|Deploy Service| orchestration
orchestration -->|Pull image| registry
frontendDeployer -->|Manual frontend deploy| web
scDeployer -->|Download| scArtifacts
scDeployer -->|Deploy contracts| node
subgraph "Cloud Provider"
orchestration(Orchestration service)
indexerProcessor(Indexer event Processor)
subgraph indexerAsg[Indexer Auto-scaling Group]
indexerRest(RESTful API Server)
end
indexerProcessor -->|Store events| sql(Managed SQL Database)
indexerRest -->|Query| sql
subgraph queryAsg[Query Optimizer ASG]
querier(Query Optimizer)
end
orchestration -->|Update image| indexerAsg
orchestration -->|Update image| queryAsg
end
indexerProcessor -->|Load events| node
querier --> node
subgraph "Content Delivery Network (CDN)"
web(Web Application)
end
subgraph "fa:fa-user End User"
mobileWallet(fa:fa-wallet Mobile Wallet) --> web
mobileBrowser(fa:fa-mobile Mobile Browser) --> web
desktopBrowser(fa:fa-desktop Desktop Browser) --> web
end
web -->|Query| indexerRest
web -->|Query| querier
Cosmos
Cosmos is more of a blockchain ecosystem than a single chain. The premise of Cosmos is creating a network of interconnected blockchains, allowing for theoretically infinite scaling of capacity by adding more application-specific chains. The core of Cosmos is the Cosmos SDK, "A Framework for Building High Value Public Blockchains." Cosmos Hub, the home of the ATOM token, is the primary chain in the network, but most activity we care about occurs on other CosmWasm-enabled chains like Osmosis, Neutron, and Injective.
The Cosmos SDK is written in Go. It includes an optional add-on called CosmWasm, which is the most popular smart contract language in the Cosmos ecosystem. Many chains implement custom functionality in their own Go modules. For example, the Osmosis Dex is implemented in Go modules instead of smart contracts. The point of this being: while different Cosmos chains are largely the same, each may have specific functionality others do not have.
Cosmos is heavily based on gRPC and Protobuf. All messages that can be sent to the chain are encoded in protobufs.
Tooling
One issue when talking about tooling in the Cosmos ecosystem is that there's so much variety. For example, when it comes to blockchain explorers, the most popular is probably MintScan, which supports many Cosmos chains. However, individual chains may end up having their own modified explorers, such as Injective and Sei.
Cosmos provides a command line utility for performing many actions. Personally (written as Michael), I find that tool very confusing to use. It has very stateful management of keys, and the mental model has never really clicked. Instead, for command line access, we generally use the cosmos CLI tool from cosmos-rs (see Backend Rust development below).
Cosmos provides a number of different protocols for talking to nodes, the primary of which are RPC (used by most tools and libraries, including cosmjs for in-browser interactions) and gRPC. The cosmos-rs library uses gRPC. You can generally find node endpoints on the Cosmos directory (or the testnet directory).
External doc links
- Official Cosmos docs
- Levana's public documentation covers Levana-specific information, but may be helpful in understanding how a Cosmos app works overall.
- Injective documentation
Mainnet, testnet, and local
Most chains provide both a mainnet (where real money lives) and testnet (fake money, data can be reset at any time). They also provide some kind of a local dev experience, usually based on Docker images. These local deployments are perfect for automated CI tests and doing local on-chain testing. As an example, check out LocalOsmosis.
Execution model
Transactions
The Cosmos execution model is linear and single-threaded. Like most chains, a Cosmos chain is a series of blocks starting at the genesis and counting up over time. Each block has a block height (1, 2, 3, etc) and block time. A block contains 0 or more transactions: signed messages that have paid some gas fee to perform something on-chain. Each transaction contains 1 or more messages, which are the actual actions to be performed.
There are many different message types in Cosmos, and individual chains may add their own message types. For example, Osmosis's Dex has a number of custom messages for performing swaps. One of the most versatile messages is around smart contract execution, which we'll discuss in CosmWasm.
Gas
Whenever you perform an action via a message, that action takes some amount of gas. Gas is a unit for measuring the execution cost of a message. Actions that use more CPU or perform more storage operations will end up using more gas.
NOTE A common source of confusion is the difference between the gas amount and the gas fee. Think of the gas amount as the raw amount of compute power you need to perform an action, and the gas fee as paying the bill to the electric company for using that gas.
The common way to determine how much gas a transaction will use is to simulate it. Simulating a transaction asks a node to pretend to execute a transaction, determine the result (success or failure), capture any generated events and log messages, and provide information on how much gas was used. Unfortunately, there are a lot of bugs in Cosmos gas calculations, and so we generally add somewhere between a 30% and 50% buffer on the simulated gas amount to make sure we don't run out of gas while executing a transaction. (Most libraries perform this buffering automatically, and call it the "gas multiplier.")
Each chain has its own way of calculating the gas amount. Additionally, determining the gas fee is chain specific too. On some chains, there is a single coin type allowed to be used for paying gas fees, and it has a fixed rate per unit of gas (e.g. 0.0025ujuno per unit of gas). Other chains have more complex mechanisms, such as Osmosis providing a fee market that automatically increases and decreases the cost of gas based on network congestion.
Gas wanted vs gas used
When you construct a transaction, you need to provide two different values:
- The amount of gas wanted. This sets an upper bound on the total amount of gas your transaction is allowed to use. If the transaction tries to use more gas than that, your transaction will fail with an "out of gas" error (Cosmos SDK error code 11).
- The gas fee amount, which needs to be sufficient to cover the gas wanted. This amount should be gas wanted * gas price (see the sketch after this list). If you provide too little gas fee, the node will refuse to accept the transaction with an "insufficient fee" error (Cosmos SDK error code 13).
  - Separately, if you try to broadcast a transaction that uses more for the gas fee than your wallet actually has, you'll receive an "insufficient funds" error (Cosmos SDK error code 5).
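To make the arithmetic concrete, here's the sketch referenced above. The 0.0025 price per unit of gas is the example figure from earlier in this page; the simulated gas amount is made up for illustration:

fn main() {
    let simulated_gas: u64 = 180_000; // illustrative number from a simulate call
    let gas_wanted = simulated_gas * 13 / 10; // 30% buffer => 234,000 gas wanted
    let fee = (gas_wanted as f64 * 0.0025).ceil() as u64; // 585ujuno gas fee
    println!("gas wanted: {gas_wanted}, fee: {fee}ujuno");
}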
Once the transaction lands in a block, in addition to the "gas wanted" value, we'll have a "gas used" value which says how much gas was actually used in practice. In theory, this should be very close to the simulated gas amount. But due to potential on-chain data changes between simulation and real execution, and bugs in the Cosmos SDK, the numbers may end up being significantly different.
Data storage
The chain itself maintains a data storage layer that can be considered the current state of the chain. For example, that data storage layer contains a mapping between every known wallet and the token balances present for that wallet. Generally, that data storage layer reflects the latest block height, though it's possible to perform historical queries to look up historical data.
Note that this data storage layer is separate from the full history of blocks in a chain. A Cosmos node essentially does the following:
- Start with an empty data storage layer
- Receive blocks from the rest of the network
- Execute the transactions in each block sequentially
- Each execution will either fail (in which case no changes happen to the data storage layer), or succeed, updating the data storage
An interesting side-effect of the above is that nodes are allowed to prune their history, not storing historical block data. As long as a node has the full up-to-date data storage information, it can answer queries about the state of the chain.
Queries
In addition to sending transactions and messages in blocks to make modifications to the chain, you can perform queries to look up information. These kinds of queries require no authentication and do not impact the chain. One thing worth mentioning though: queries--and especially smart contract queries--still calculate how much gas is used to perform the query, and nodes will have a hard-coded gas limit for queries to avoid DoS attacks. The default is 300,000 gas. An upshot of this is that, when designing smart contract query APIs, you need to ensure that queries can reliably complete in under that gas cap, which essentially means you need to use only O(1) operations in smart contract code.
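For example, here's a minimal sketch of an O(1) query handler using cw-storage-plus (the BALANCES map is illustrative, not from any particular project). Loading a single key stays well under the query gas cap, while iterating the whole map with range may not:

// O(1): one storage load per query, regardless of how many entries exist
use cosmwasm_std::{Addr, Deps, StdResult, Uint128};
use cw_storage_plus::Map;

const BALANCES: Map<&Addr, Uint128> = Map::new("balances");

fn query_balance(deps: Deps, addr: &Addr) -> StdResult<Uint128> {
    Ok(BALANCES.may_load(deps.storage, addr)?.unwrap_or_default())
}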
Error handling
Each transaction in a block will either succeed or fail. Success means that every message within the transaction succeeded. In those cases, changes from the transaction will actually update the chain. By contrast, if any of the messages in a transaction fail, the entire transaction will fail, and no changes will be written to the chain. This is fairly similar to ACID guarantees in databases (commit vs rollback), and in fact greatly simplifies the programming model around smart contracts on Cosmos.
Note that smart contracts are allowed to create something called submessages, which are allowed to fail without aborting the transaction overall. But that's a more advanced usage.
Transaction lifecycle
Let's walk through the process of getting a transaction on-chain (a pseudocode sketch follows the list). It goes something like this:
- Application constructs a set of messages it wants to send on-chain.
- Application constructs a fake transaction containing those messages. By fake, I mean that it does not need to include real signatures, can request absurd gas amounts, etc.
- Application contacts a node and performs a simulate query to determine the result of running the transaction.
- Generally, if the transaction failed, the application will report the error and stop, since it's usually not helpful to get a failing transaction on chain. However, technically speaking, an application is free to ignore the error result.
- Using the simulate response value, the application constructs the real transaction, including a real gas amount based on the simulated gas (including the 30% buffer mentioned above), a gas fee amount (using whatever calculations are relevant for the chain in question for calculating that), and a real cryptographic signature proving that the wallet owner is trying to perform these actions.
- Application broadcasts the transaction to a node. After that, the application will generally query the node every 100ms to check if the transaction (identified by the txhash, or transaction hash) has been included in a block yet. In the meantime, the node continues operation.
- The node will store the new transaction in its mempool as an unconfirmed transaction.
- Using Cosmos's peer-to-peer protocol, the node will broadcast the transaction to other nodes in the chain.
- Each block in a chain is constructed by a proposer, which will take as many transactions from its mempool as possible and try to construct a block from them. It will then propose that block to the rest of the validator nodes.
- The validator nodes see the proposed block, determine if it's valid (signatures match, no invariants of the chain are violated, etc.), and assuming there is consensus, the block is accepted as the next block in the chain.
- When this happens, the transaction is now part of a block, and so the applications polling in step (6) will now complete successfully.
- Note that, even if a transaction lands in a block, it may still be an error! And even if simulating a transaction succeeded, actual execution could still fail for a number of reasons (chain state changed since simulation, gas estimate was wrong, etc).
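Here is the same flow as pseudocode, in the style of the wallet pseudocode later in this document. Every helper name here is made up for illustration; use whatever your chain library (e.g. cosmos-rs) actually provides:

msgs := buildMessages()                  // 1: the actions we want on-chain
sim := node.simulate(unsignedTx(msgs))   // 2-3: fake tx, simulated by a node
abortIfFailed(sim)                       // 4: usually stop on simulation failure
gasWanted := sim.gasUsed * 1.3           // 5: buffer the simulated gas
fee := gasWanted * chainGasPrice(node)
tx := sign(msgs, gasWanted, fee, myPrivateKey)
txhash := node.broadcast(tx)             // 6: enters the node's mempool
result := pollUntilInBlock(node, txhash) // 7-12: poll every 100ms until included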
CosmWasm
Please make sure you're familiar with the Cosmos execution model before reading this page. Also, the CosmWasm book is the official resource for CosmWasm, and will include information not contained in this page.
The Cosmos blockchain ecosystem is modular, allowing different submodules to be enabled or disabled for an individual chain. CosmWasm is one of the most common add-ons. It is a smart contract language built on WASM and Rust. Some highlights:
- CosmWasm (and Cosmos overall) is a single-threaded execution model.
- Contract execution is either success or failure. A failed execution will not write any changes to the chain state. Think of this like a database rollback.
- Contracts are able to query other Cosmos modules, other contracts, raw storage, and more.
- Contracts are able to spawn submessages. The contract can determine whether a failing submessage will cause the entire transaction to fail or not.
- Contracts are (generally) written in Rust, compiled to WASM, and then uploaded to a chain. When you upload, you get a code ID for that upload, which can then be instantiated into one or more live contracts.
- Every contract has its own data storage. This is a key/value store of arbitrary binary data (though in storage we'll discuss common techniques for easing storage interaction). Only a contract can write to its own storage, though other tools and contracts can query the data storage using a raw query.
- Contracts have entrypoints, which is how you interact with them. The most common entrypoints are:
- Instantiate: used when instantiating a code ID into a live contract. The result of instantiation will be a fresh contract with its own contract address.
- Query (aka smart query): perform a read-only query against the contract.
- Execute: perform an action on the contract. This is open to all accounts, though authentication can be performed within the contract itself.
- Migrate: each contract can optionally have a chain-level admin that is capable of performing a migration. Migrations can be used to move to a new code ID, and can also optionally run arbitrary code during a migration to--for example--update data in the contract.
- Reply: used for handling callbacks for running submessages. For example, a contract may emit a submessage to transfer funds from the contract to a user wallet (e.g., when collecting staking rewards). Optionally, you can specify that after running the submessage, the reply entrypoint of your contract should be called with the execution result. This can allow you to do things like handling insufficient funds without just aborting the entire transaction.
- Interactions with contract entrypoints are (virtually always) done via JSON requests and responses. Rust's serde library is heavily used by CosmWasm to make it easy to generate these message types. It's common to separate message types into their own Rust crate and generate JSON schema files for use by external tools (see the sketch below).
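As a minimal sketch of that pattern, assuming the cosmwasm-schema helper crate (the message shapes themselves are illustrative):

use cosmwasm_schema::{cw_serde, QueryResponses};

#[cw_serde]
pub struct InstantiateMsg {
    pub admin: String,
}

#[cw_serde]
pub enum ExecuteMsg {
    Transfer { recipients: Vec<String>, ident: String },
}

#[cw_serde]
#[derive(QueryResponses)]
pub enum QueryMsg {
    // QueryResponses records each response type for JSON schema generation
    #[returns(String)]
    Ident {},
}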
Scaffold
There are lots of different ways of structuring CosmWasm contracts. The only required bit is that the entrypoints need to be public, plus some rules around setting up the library crate with proper wasm config. You can check out a simple starter that includes a test framework at:
https://github.com/snoyberg/cosmwasm-starter
Some comments:
- You'll need to include crate-type = ["cdylib"] in the Cargo.toml's lib section.
- Most projects seem to use thiserror for error handling, though anyhow works well too. Even though our general advice is to use anyhow for application error handling, in the case of smart contracts using thiserror can be preferable so that external tools can display custom error messages (see the sketch after this list).
- Execute and query message types are generally enums, while other entry points use structs.
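For the error handling bullet, here's a minimal thiserror sketch; the variants are illustrative, not from any particular project:

use cosmwasm_std::StdError;
use thiserror::Error;

#[derive(Error, Debug)]
pub enum ContractError {
    // Lets ? convert standard library errors automatically
    #[error("{0}")]
    Std(#[from] StdError),
    // A custom error that external tools can display verbatim
    #[error("unauthorized: only {admin} may execute this")]
    Unauthorized { admin: String },
}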
Cosmos vs contract message types
A complication around messages is that, with CosmWasm, you have two layers of "messages." And, for that matter, two layers of "queries." Let's clarify.
Cosmos chains make extensive use of protobufs and gRPC to define their messages and queries, as well as the services that support those queries. I'll use the generated Rust docs to demonstrate, since I'm most familiar with them.
When you want to upload a new contract to the blockchain, you need to do what's called "store code." This is a message called MsgStoreCode. This message is part of the CosmWasm module of Cosmos, and can be included in a transaction just like other messages (like MsgSend). At this point, there's only one layer of messages: Cosmos chain messages.
Once you store code, you'll get back a code ID. Now you'll want to instantiate the contract. To do this, you'll need to send a MsgInstantiateContract message. This data structure includes the code_id from the store code action. But it also contains a msg: Vec<u8> field, which is the encapsulated contract-recognized JSON message for the instantiate entrypoint. This is where the two layers come into play:
- The Cosmos chain itself sees the MsgInstantiateContract. It then grabs the msg field from that, and then...
- The chain will run your smart contract WASM code, providing it the msg field (and other metadata). The contract is responsible for parsing the JSON into the correct data type and then processing the instantiation.
The same logic applies to execution: you send a MsgExecuteContract and include a msg field within it.
Queries are different, since they don't perform any actions and are not included in a transaction. Instead of having messages, we have QuerySmartContractStateRequest. This can be sent to a node without signing a transaction, and includes the JSON message sent to the contract code in the query_data field.
So, in sum:
- Each smart contract defines its own entry points and the message types they accept.
- These messages get wrapped up inside Cosmos messages and queries designed for handling smart contracts.
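A sketch of the wrapping: the inner message is plain JSON serialized to bytes, and those bytes become the msg field of the outer MsgExecuteContract. The ExecuteMsg shape and placeholder address below are illustrative:

use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "snake_case")]
enum ExecuteMsg {
    Transfer { recipients: Vec<String>, ident: String },
}

fn main() {
    let inner = ExecuteMsg::Transfer {
        recipients: vec!["wallet-address-goes-here".to_string()],
        ident: "airdrop-1".to_string(),
    };
    // These bytes go in the msg field of MsgExecuteContract, alongside the
    // sender, the contract address, and any attached funds.
    let msg: Vec<u8> = serde_json::to_vec(&inner).unwrap();
    println!("{}", String::from_utf8(msg).unwrap());
}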
Entrypoint example
Here's an example of a real-life entrypoint. This is an execute entry point from a contract which handles "secure transfers": it ensures that funds are transferred to each wallet only one time. It's intended to help with airdrops and avoid double-funding people, which can happen if you accidentally rerun a normal MsgSend. Instead, by tracking everything in the smart contract, the smart contract can be responsible for rejecting any attempts to airdrop to the same wallet twice.
#[entry_point]
pub fn execute(deps: DepsMut, env: Env, info: MessageInfo, msg: ExecuteMsg) -> Result<Response> {
    let admin = ADMIN.load(deps.storage)?;
    if info.sender != admin {
        return Err(Error::NotTheAdmin {
            admin,
            sender: info.sender,
        });
    }
    match msg {
        ExecuteMsg::Transfer { recipients, ident } => {
            check_ident(deps.storage, &ident)?;
            transfer(deps, recipients)
        }
        ExecuteMsg::Retrieve { ident } => {
            check_ident(deps.storage, &ident)?;
            retrieve(deps, admin, env)
        }
    }
}

fn check_ident(store: &dyn Storage, ident: &str) -> Result<()> {
    let actual_ident = IDENT.load(store)?;
    if ident == actual_ident {
        Ok(())
    } else {
        Err(Error::MismatchedIdent {
            provided: ident.to_owned(),
            actual_ident,
        })
    }
}

const RECIPIENTS: Map<Addr, Vec<Coin>> = Map::new("recipients");

fn transfer(deps: DepsMut, recipients: Vec<Recipient>) -> Result<Response> {
    let mut res = Response::new();
    for Recipient { addr, coins } in recipients {
        let addr = deps.api.addr_validate(&addr)?;
        if let Some(coins) = RECIPIENTS.may_load(deps.storage, addr.clone())? {
            return Err(Error::AlreadyTransferedTo { addr, coins });
        }
        RECIPIENTS.save(deps.storage, addr.clone(), &coins)?;
        res = res.add_message(BankMsg::Send {
            to_address: addr.into_string(),
            amount: coins,
        });
    }
    Ok(res)
}
Some highlights to pay attention to in the code above:
- The storage mechanism we use is cw-storage-plus, discussed below.
- Entrypoints use the #[entry_point] attribute macro to generate some boilerplate code.
- Note that the execute entrypoint performs admin checking inside of it, following a common pattern of erroring out if the user has no permissions. Remember that if a contract exits with an error, all storage changes it may have performed will be wiped out.
- The execute endpoint receives some parameters that help with processing the data. A rundown:
  - msg: ExecuteMsg always contains our locally defined data type for execute messages. The #[entry_point] macro generates code for parsing and validating the raw bytes sent into the chain.
  - info: MessageInfo contains information on the user that submitted the transaction and any funds they sent along with it.
  - env: Env contains information on the status of the blockchain (block height and time) and the contract that is being executed.
  - deps: DepsMut provides three fields:
    - querier: QuerierWrapper is used for querying the chain for things like token balances.
    - api: &dyn Api is primarily used for validating that wallet addresses are valid.
    - storage: &mut dyn Storage provides the ability to read from and write to the contract's storage.
  - Note that, in addition to DepsMut, there's also a Deps data type. This is used in the query entrypoint, and provides a read-only storage: &dyn Storage field instead, which provides a type-safe way to ensure that queries can't write to storage (see the sketch below).
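To see that in action, here's a minimal sketch of a query entrypoint. The QueryMsg variant is illustrative, reusing the IDENT item from the contract above; note that to_binary is named to_json_binary in newer cosmwasm-std versions:

#[entry_point]
pub fn query(deps: Deps, _env: Env, msg: QueryMsg) -> StdResult<Binary> {
    match msg {
        // deps.storage here is a read-only &dyn Storage: writes won't compile
        QueryMsg::Ident {} => to_binary(&IDENT.load(deps.storage)?),
    }
}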
Common libraries
- cosmwasm-std is the core library for CosmWasm, providing helper functions and data types for many common and primitive operations.
- cw-storage-plus is the de facto standard storage library, providing a nice abstraction over the Storage trait for handling things like singletons, maps, and more. I wouldn't describe this library as stellar, but it's Good Enough, and sticking to it simplifies code and helps us avoid corner cases.
- Error handling is pretty standard, using commonly used libraries like thiserror and anyhow (see the scaffold notes above).
- cw-multi-test is great for writing unit and integration tests. It will simulate a chain environment without needing to run a full chain. We typically call tests using cw-multi-test "off-chain tests." It's a good idea for important projects to also have on-chain testing, where you deploy the contract to the chain and run a series of actions, usually using something like LocalOsmosis.
Storage
Generally stick to cw-storage-plus for storage.
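As a minimal sketch, typical cw-storage-plus declarations look like this (the names are illustrative):

use cosmwasm_std::{Addr, Uint128};
use cw_storage_plus::{Item, Map};

// A singleton value stored under a fixed key
pub const ADMIN: Item<Addr> = Item::new("admin");
// A typed map from address to balance
pub const BALANCES: Map<&Addr, Uint128> = Map::new("balances");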
Smart vs raw queries
CosmWasm smart contracts are a combination of code and storage. Most of the time we interact with smart contracts, it's through an entrypoint. The two most common entrypoints are execute and query. execute is performed in a transaction and lands on-chain, requires gas and a wallet to send it from, and can make arbitrary changes to storage. query (more precisely, a "smart query") is a read-only message that runs the contract code on some input JSON message and gives back a JSON response. It's a "smart" query because (1) it's a smart contract and (2) it performs a smart piece of logic embedded in the contract.
However, there’s one other way to query the storage of a contract: a raw query. Everything that gets stored in contract storage does so through a simple key/value store, where the keys and values are both arbitrary binary blobs. Internally to the smart contract, if you dig deep enough through the helper libraries, you’ll eventually find code that directly interacts with this key/value store.
However, you can also do a raw query from outside of the contract. This can be useful primarily for two reasons:
- Get access to some data that’s not exposed through a smart query.
- Reduce gas costs: a smart query requires running the blockchain’s VM system, which is relatively expensive. Raw queries are significantly cheaper.
Backend Rust development
Most blockchain applications involve some kind of off-chain, non-frontend work. This can be API servers, bots, indexers, and data aggregation tools. As our primary language at FP Complete is Rust, we've built up a library for performing these actions in Rust:
https://github.com/fpco/cosmos-rs
This repo contains both a Rust library for communicating with Cosmos chains over gRPC, and a command line tool for performing common operations. They support standard Cosmos chains as well as Injective (see Frontend development and Wallets and keys for details of differences).
Hopefully the generated API docs and CLI --help comments are clear enough that further docs here aren't necessary. That said, if team members run into issues, please help improve the docs here and in the repo.
Frontend development
When doing frontend development on Cosmos, you need to handle two things:
- Handle the connections to different wallet providers (like Keplr, Leap, etc.). This is used to propose transactions for the user to sign.
- Communication with the blockchain, either for queries or for broadcasting transactions.
In many of our projects, we handle most of (2) by writing a backend service that provides a REST API instead of the frontend directly communicating with the chain. This allows for caching, auto-failover to other nodes, batching requests, and more. But for purposes of this document, we'll ignore that approach.
Which set of libraries you use for the above depends on which chain you're working with.
Most Cosmos chains
Most chains in the Cosmos ecosystem use a similar set of settings for wallets and signing, as well as the blockchain API. For those chains, the de facto standard libraries are:
- cosmjs is the core library for communicating with Cosmos chains over RPC. Note that it uses RPC, not gRPC, so it's compatible with browser limitations.
- Cosmos Kit provides an intermediate API between the various Cosmos-compatible wallets and your application. In theory, by using Cosmos Kit, your application can trivially support every Cosmos wallet out there.
The reality about both of these libraries, and Cosmos Kit in particular, is that they are poorly documented, difficult to work with, and sometimes straight up buggy. We still use them because it's the best bad choice we have. For backend code, we prefer using cosmos-rs, though that isn't an option in the browser.
Over time, we should flesh out this page with more tips and tricks for frontend dev with these two libraries. For now, the best recommendation is to ask someone on the team for codebases to look at that demonstrate how to use these libraries.
Injective
Injective is a Cosmos chain that works pretty differently from others. It uses Ethereum's key/wallet management and signing systems to remain compatible with that ecosystem, but it is therefore incompatible with most other Cosmos chains. Additionally, some of the APIs used are different, such as providing an Ethereum address from the account information query, which creates complications in our code base sometimes.
It's not that vital to understand all the low-level differences. The high level point is that, when working with Injective, you'll want to use libraries that are geared towards supporting this. You can check out Injective's TypeScript docs, which point to the following replacement libraries:
- @injectivelabs/sdk-ts
- @injectivelabs/wallet-ts
CW3 multisigs
Blockchain actions are generally authenticated by using a single cryptographic signature, either from a hot wallet (something on a computer with an internet connection, aka software wallet) or a cold wallet (something on a device like a Ledger or Trezor, aka a hardware wallet). Hot wallets are very convenient, but run the risk of compromise of the key due to user error, wallet bugs, system compromise, etc. Cold wallets are more secure, but ultimately leave all decision making power in the hands of a single person.
If you're using blockchain for maintaining your own, personal funds, this often makes sense. But for companies and projects that want to maintain a treasury or control the administration of their smart contracts, trusting a single person is dangerous. Users can be hacked through social engineering, themselves be nefarious, or fall victim to the five-dollar-wrench attack. To combat that, many projects use multiple signatures--aka multisig--to control their treasuries and secure their contracts.
One approach to multisig is to use something like Cosmos native multisig. Many chains include some kind of native mechanism for this. However, in practice, this is often cumbersome to work with. In Cosmos in particular, this usually requires working with command line tools and sharing messages manually for signing.
Another common approach, and the one we tend towards, is using a CW3 multisig.
What is CW3
CW3 is the third CosmWasm standard, documented in the cw-plus repo. It defines a standard for a contract, which you can think of as an API. It also provides concrete implementations, which are generally what people end up using.
CW3 is a smart contract implementation of multiple signatures. All activities occur on the blockchain, and do not require passing and signing messages separately. Additionally, it's relatively easy to provide a nice web interface for it. As long as your chain supports CosmWasm, it's our recommended choice.
You can check out the ExecuteMsg and QueryMsg of CW3 to see more details of how it works.
Signer count
Each CW3 has a configuration of who can sign and how many signatures are needed for a proposal to pass. The most common, and simplest, are configurations like "3 of 5", meaning that there are 5 signers allowed on a contract and 3 of them are required to pass a proposal. Contracts can be as customized as you'd like though. You can have rules like "I need to have 1 vote from group A and two votes from group B," or any other configuration you can think of.
In practice, all wallets we've set up have been one of these simple approaches. 2 of 3 is considered the minimal multisig, and 3 of 5 is probably the most common.
Fixed vs flex
The cw-plus repo contains two basic approaches to setting up a CW3:
- Fixed: when you create the contract, you give it a hard-coded list of members and their voting power. After that, they can never be changed.
- Flex: first you create a CW4 contract, which maintains a group of users, and then create a CW3-flex that refers to that CW4.
Both contracts have their place, but we've been leaning towards flex. Our cosmos-rs CLI tool provides a cw3 subcommand that does the following:
- Creates a new CW4
- Creates a new CW3-flex that refers to that CW4
- Updates the CW4 to use the new CW3-flex as its admin
In this way, the members of the contract can self-manage adding and removing members as needed.
Proposal process
The process for a proposal is (a sketch of the messages follows the list):
- One of the members creates a new proposal. This will include a title and description, as well as a list of messages. You can do anything from sending coins to executing smart contract messages.
- Other members can vote yes, no, or abstain on the proposal.
- If enough people vote yes, the proposal is passed. At that point, anyone (even non-members) can execute the proposal.
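Here's a hedged sketch of the contract-level JSON messages involved, using the message names from the cw3 spec; the title, description, and proposal ID are made up:

use serde_json::json;

fn main() {
    // Step 1: a member creates a proposal containing arbitrary messages
    let propose = json!({
        "propose": {
            "title": "Pay audit invoice",
            "description": "Send funds to the auditor",
            "msgs": [] // any Cosmos messages, e.g. a bank send
        }
    });
    // Step 2: members vote yes, no, or abstain
    let vote = json!({ "vote": { "proposal_id": 1, "vote": "yes" } });
    // Step 3: once passed, anyone can execute
    let execute = json!({ "execute": { "proposal_id": 1 } });
    println!("{propose}\n{vote}\n{execute}");
}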
Web interface
Manually interacting with a CW3 contract is tedious. You need to query the list of proposals, query each proposal for its vote status, and manually construct messages for proposing, voting, and executing. Instead, it's common to use a web interface. We've set up our own website for doing this: FP Block Cosmos GUI.
Tokens: CW20 vs tokenfactory
The Ethereum blockchain was the first one to introduce the concept of tokens. Within Ethereum, the ETH coin is the native gas coin of the chain, and you can natively (meaning: a function of the chain itself) send ETH to other wallets, to smart contracts, and even include some ETH when executing smart contracts. However, people wanted to be able to create new assets without spinning up a new chain.
To allow for this, the Ethereum ecosystem invented ERC20, a standard for tokens. This is a smart contract API, and any smart contract that provides the necessary functionality can be considered its own token. There are thousands of such tokens available on Ethereum.
In the Cosmos ecosystem, we have a related standard: CW20. This is intended to do the same thing as ERC20, but based on Cosmos smart contracts. Both standards allow for sending tokens, checking balances, and executing smart contract messages with attached funds. However, in both ecosystems, it's not as natural or as cheap (from a gas perspective) as interacting with the native coins.
A number of chains in the Cosmos ecosystem have been moving away from CW20. Instead, there are two common ways to deal with other assets these days:
- Using Inter Blockchain Communication (IBC), you can bridge a token from one chain to another.
- Some chains provide a token factory, which allows you to create a new native coin on that chain.
On the Osmosis chain, for example, you can trade ATOM. The ATOM you'll be trading is bridged via IBC to Osmosis. Also on Osmosis, the Levana team created a new LVN token using token factory, so that its native location is Osmosis.
For the most part, CW20s are dying off now, so you should focus on native coins, IBC, and token factory. Each of these is identified by a denom or denomination, a string that uniquely identifies each asset on that chain. You can see a list of Osmosis assets. Some examples:
- uosmo is the native Osmosis coin. The u at the beginning means "micro," so that 1,000,000 uosmo is one OSMO coin.
- ibc/27394FB092D2ECCD56123C74F36E4C1F926001CEADA9CA97EA622B25F41E5EB2 is the IBC-bridged ATOM coin on Osmosis.
- factory/osmo1mlng7pz4pnyxtpq0akfwall37czyk9lukaucsrn30ameplhhshtqdvfm5c/ulvn is the Levana token created via token factory.
With native tokens, including IBC and token factory, you can send the coins with a normal MsgSend message, query the balance using native queries, and attach the funds to any smart contract execution.
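A minimal sketch of that uniformity, using cosmwasm-std (the recipient is a placeholder): the bank module treats native, IBC, and token-factory denoms as plain strings, so the same BankMsg::Send works for all of them.

use cosmwasm_std::{coins, BankMsg};

pub fn send_one_osmo(to: &str) -> BankMsg {
    BankMsg::Send {
        to_address: to.to_string(),
        amount: coins(1_000_000, "uosmo"), // 1,000,000 uosmo = 1 OSMO
    }
}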
Wallets and keys
The blockchain world is heavily based on private key cryptography. Operations you perform on chain generally need to be signed using a private key and will be verified against (some version of) the corresponding public key. But the details are confusing, and the terminology can be misleading. Let's break down the process from start to finish.
Cryptographic primitives
Private key cryptography allows for two pairs of operations:
- Encrypting a message using a public key, and decrypting it using the corresponding private key.
- Signing a message using a private key, and validating the signature using the corresponding public key.
Encryption is very rarely used in blockchains. Instead, signing of messages is the primary feature we use from private key cryptography. There are lots of different algorithms out there, but a bit of pseudocode will explain the high level concepts pretty well:
myPrivateKey := something // we'll explain where this comes from below
myPublicKey := derivePublicKey(myPrivateKey)
someMessage := b"deadbeef" // any arbitrary binary payload
signature := signMessage(someMessage, myPrivateKey)
isValid := validateSignature(someMessage, signature, myPublicKey)
The point here is that you can safely share your public key with the rest of the world, and they cannot figure out what the corresponding private key is (at least before the heat death of the universe, assuming the cryptography is well designed). Then, using my private key, I can prove that only someone who controls that private key sent a message. And anyone in the world can validate it.
For the blockchain, this is the basis for signing transactions and sending funds. If I have 2 BTC, and I want to send 1 BTC to Alice, I can sign a transaction using my private key, and the network will know that the owner of the 2 BTC created that transaction.
You might challenge this, and say that the private key could be hacked. This is true, and people lose funds like this all the time. This is another "feature" of the blockchain: whoever controls the keys controls the funds. You may have heard a similar phrase: not your keys, not your coins. This is a big difference between blockchain and TradFi (traditional finance). If someone hacks into my bank account and sends all my money to someone else, I'll probably be able to get the money back. On the blockchain, if they have your keys, your money is gone.
But we still don't know where the private keys come from. One possibility is just generating a random private key. But that's not normally what happens in the blockchain space. Instead, we have...
Cryptographic hash functions
Another cryptographic primitive we use is a hash function. The idea of a hash function is to take an arbitrary amount of data and generate a fixed number of bytes. For example, SHA256 generates (surprise) a 256-bit value. The goals of hash functions are:
- Non-reversible: based on a hashed value, it should be impossible to determine the input data that led to it.
- Even distribution: there should be roughly an equal chance of getting any possible value from the hash function.
- Cascading: a small change in input should result in a large change in the output.
Hashes are used quite extensively in blockchains, such as "transaction hashes" and in the process of signing messages described above. Our use of hash functions in this document revolves around generating wallet addresses, discussed below.
Seed phrases
A seed phrase is usually a set of 12 or 24 English words, taken from a dictionary of 2,048 available words. Seed phrases are specified by BIP-39, if you want to look up more details. Using that set of words, you can generate a large number. This number can then be used to derive a private key, using...
Derivation paths
A companion standard to BIP-39 is derivation paths. These look like m/44'/118'/0'/0/0. You can see more information on them in BIP-44. The basic idea is that you can take that big number from the seed phrase and generate a large number of different private keys.
The different numbers in the derivation path can be used to indicate different coin types. The 118 above, for example, is the default coin type in the Cosmos ecosystem. 60 is used in the Ethereum space, by contrast. (And, since Injective follows Ethereum standards, it's used by Injective too.) For a full list, check SLIP-44.
Other numbers in that list can be used for deriving a wider range of wallets within the same coin type (see the sketch after this list). In particular, the last 0 is typically called the index, and can be used for creating numbered accounts. Some common use cases:
- Having a single seed phrase for bots, but allowing the bots to manage multiple wallets.
- Using a single hardware wallet (like a Ledger) but managing multiple accounts.
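For illustration, here are a few paths side by side; the constant names are just labels:

// Coin types come from SLIP-44; the final component is the index discussed above.
const COSMOS_FIRST_ACCOUNT: &str = "m/44'/118'/0'/0/0"; // Cosmos coin type, index 0
const COSMOS_SECOND_ACCOUNT: &str = "m/44'/118'/0'/0/1"; // same seed, index 1
const ETHEREUM_STYLE: &str = "m/44'/60'/0'/0/0"; // Ethereum coin type, also used by Injective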
bech32
bech32 is a standard for representing binary data. It's not worth going into the details of how bech32 works here, but the important point is that it provides for a human readable part (HRP) and a payload, and that the payload includes a checksum to avoid transcription errors. bech32 is used throughout the Cosmos ecosystem for encoding addresses. (It started in the Bitcoin world as SegWit addresses.)
You can use an online encoder to get a feel for this. As an example, if I specify an HRP of fpco and a payload of deadbeef, I get the wallet address fpco1m6kmamch637yv. Note that the 1 here represents the separation between the HRP and the encoded data.
The question is: what data should we use for the payload? The obvious answer there would be the public key. Unfortunately public keys are large, and embedding an entire public key in the wallet address wouldn't scale well.
Instead, we generate a wallet address by first hashing the public key, and then bech32 encoding it. Different ecosystems use different standards for this hashing, in particular Injective is different from the rest of Cosmos. You can check out the implementation in cosmos-rs for details.
Which leaves the final question...
Signatures
There's a problem with using hashed public keys for wallet addresses: we can't use them for signature validation! How can we be sure that a transaction originated from the right person if we can't validate the signature?
The answer is that, when you send a transaction, you have to not only include the wallet address (the bech32-encoded, hashed public key), but also the raw public key. Then the blockchain is responsible for confirming that the public key would result in the given wallet address, and that the attached signature matches that public key.
Playing with this
Everything described here is pretty high level and abstract, but hopefully enough to orient you. If you want to experiment, virtually every wallet software out there will generate seed phrases and wallet addresses for you. You can also do this with the cosmos CLI tool from cosmos-rs:
➜ cosmos wallet gen-wallet osmo
Mnemonic: buffalo comic shock alarm table urge huge crucial crystal february twice will path comfort afford differ come cage despair hawk must talk thing trumpet
Address: osmo14aqvzkjewk4gjp9uq02536v02695fpe4m5ukpt
➜ cosmos wallet change-address-type osmo14aqvzkjewk4gjp9uq02536v02695fpe4m5ukpt fakehrp
fakehrp14aqvzkjewk4gjp9uq02536v02695fpe426v9sc
➜ cosmos wallet change-address-type fakehrp14aqvzkjewk4gjp9uq02536v02695fpe426v9sc osmo
osmo14aqvzkjewk4gjp9uq02536v02695fpe4m5ukpt
Cosmos FAQ
What is an account sequence mismatch error?
Every active wallet on a Cosmos blockchain has an account number. This is given out the first time that wallet receives native coins on the chain. Every time you sign a transaction, you need to include both this account number and an account sequence number. This is a monotonically increasing nonce to avoid replay attacks and other issues. It's possible to get an account sequence mismatch error if the node you're talking to is expecting to see a different sequence number on a transaction. This can happen for multiple reasons, but the most common are:
- The node you're talking to has fallen out of sync with the rest of the chain, and therefore has an incorrect view of your next sequence number.
- There are two services/people/something using the same wallet at the same time.
- You have a code error where you're using the wrong sequence number.
- There's a fundamental bug in the node you're talking to.
A related error is "account not found." This occurs when a wallet doesn't have an account number yet, because no one has sent that wallet any native coins.
Near
Please consider this document as a draft. There will be other iterations, perhaps one written by you.
Near Protocol. On their website, Near is described as not just a blockchain protocol, but an operating system that offers a common layer for browsing and discovering open web experiences, compatible with any blockchain. Few of the blockchains I know about are scalable by design, allowing natural interoperability between actors on different nodes.
Blockchain Scaling Approaches
Interestingly, Near is scalable by design. I suggest you read this document. The summary is that we can compare Near shards to Ethereum with rollups, where each shard is similar to an optimistic rollup. While Ethereum is evolving in that direction, Near has been designed for scaling from the start, and the smart contract infrastructure is conceived to allow transparent interoperability between shards. Looking at the concrete implementation, it appears it will not be 100% transparent, but perhaps in a new iteration of this document we'll be able to share some concrete experience. For now, let's analyze the lifecycle of a transaction in a sharded context.
Transactions in a sharded context
The immediate outcome of the transaction execution is merely an acknowledgment indicating that the transaction will be executed on the blockchain. This internal execution request is known as a receipt. Conceptually, you can envision the receipt as an internal transaction that facilitates the transfer of information across different shards within the NEAR blockchain.
Tooling
For instructions on installing the standard tooling for the CLI (near-cli) and for development, go to the smart contracts section.
Testnet transaction explorer: https://explorer.testnet.near.org/transactions/
External doc links
- Official Near docs
- Rust for Blockchain Application Development Chapter 8 talks about Near.
Mainnet, testnet, and local
When you install near-cli you can run a local blockchain, similar to, say, Ganache. More information will be added here once we've tried it.
Creating a Testnet account
After installing the local tooling provided by Near, you can create a testnet account:
# Replace <your-account-id.testnet> with a custom name
near create-account <your-account-id.testnet> --useFaucet
or if you want to specify custom parameters:
near account create-account sponsor-by-faucet-service <your-account-id.testnet> autogenerate-new-keypair save-to-keychain network-config testnet create
In both cases, if the account id is available, you'll get an output like this:
Your transaction:
signer_id: testnet
actions:
-- create account: vfp.testnet
-- add access key:
public key: ed25519:2nh8uWoxsHDj9hzUKfkabtL7YDSTBNPSTDPsaz8ELQoX
permission: FullAccess
▹▹▸▹▹ Creating a new account ...
▹▹▸▹▹ ↳ Receiving request via faucet service https://helper.nearprotocol.com/account
New account <vfp.testnet> created successfully.
The data for the access key is saved in the keychain
Transaction ID: GLDRf3R17GJmpfpaiioX4WXMS2FkV3jxSUVYQZ6wdQtx
To see the transaction in the transaction explorer, please open this url in your browser:
https://explorer.testnet.near.org/transactions/GLDRf3R17GJmpfpaiioX4WXMS2FkV3jxSUVYQZ6wdQtx
Here is your console command if you need to script it or re-run:
near account create-account sponsor-by-faucet-service vfp.testnet autogenerate-new-keypair save-to-keychain network-config testnet create
To understand what a custom account name is, just keep on reading.
Accounts
- Account ID. A unique identifier associated with each account on the NEAR blockchain. Account IDs are human-readable and are typically represented as strings. For instance, alice.near or my_dapp.near could be valid account IDs. Account IDs act as the addresses to which funds can be sent and provide access to the associated account's data and smart contract functionality.
- Implicit accounts. Created automatically as part of a transaction. When a transaction is sent from a particular account ID that does not exist, NEAR automatically creates an implicit account with that ID. Implicit accounts are useful for one-time interactions or temporary data storage within a transaction.
// Create an implicit account in a transaction
#[near_bindgen]
pub fn create_implicit_account(&mut self, account_id: String) {
    let account_id: ValidAccountId = account_id.try_into().unwrap();
    env::log(format!("Creating implicit account: {}", account_id).as_bytes());
    // Perform actions with the implicit account
    // ...
}
- Named accounts. Accounts with persistent state and private keys. They can receive funds, store data, and interact with other smart contracts. Named accounts provide a more permanent identity for users or dApps within the NEAR ecosystem. Developers can create and manage named accounts programmatically using NEAR's SDKs.
Account creation and contract deployment
const NEAR_RPC_URL: &str = "https://rpc.mainnet.near.org";

// Connect to the NEAR network
let near = near_sdk::connect::connect(near_sdk::Config {
    network_id: "mainnet".to_string(),
    node_url: NEAR_RPC_URL.to_string(),
});

// Create a new account using async/await and Promises API
async fn create_and_deploy() {
    let new_account = near.create_account("new_account").await.unwrap();
    // Load contract code
    let contract_code = include_bytes!("path/to/contract.wasm");
    // Deploy a contract to the new account using Promises API
    new_account.deploy_contract(contract_code).await.unwrap();
}

// You can call `create_and_deploy` in an async context
Addresses
In Near, addresses are cryptographic hashes generated from account IDs, providing a secure and tamper-resistant way to identify accounts.
Transaction routing
The NEAR network utilizes addresses to determine the appropriate shard, or a subset of nodes, responsible for processing the transaction, enabling efficient and scalable transaction processing.
Secure transactions and interactions
Access Keys
- Full access keys. Grant complete control over an account, allowing the holder to perform any operation on behalf of the account. These keys are typically used by account owners or trusted entities requiring full control over the associated account.
- Function call keys (limited access keys). Grant permissions for specific actions or function calls within a smart contract. This key type is commonly used for delegating specific tasks to trusted third-party contracts or for executing specific actions without granting full account access.
The account owner may designate a specific access key to manage the account's resources. A locked account requires the usage of a specific access key for any transaction or operation to be executed successfully. This provides an additional layer of security because even if an attacker gains access to other access keys associated with the account, they cannot perform any operation without the designated access key.
// Create a full access key
#[near_bindgen]
pub fn create_full_access_key(&mut self, public_key: PublicKey) {
    self.env().key_create(
        public_key,
        &access_key::AccessKey {
            nonce: 0,
            permission: access_key::Permission::FullAccess,
        },
    );
}

// Create a function call key
#[near_bindgen]
pub fn create_function_call_key(&mut self, public_key: PublicKey) {
    self.env().key_create(
        public_key,
        &access_key::AccessKey {
            nonce: 0,
            permission: access_key::Permission::FunctionCall {
                allowance: access_key::FunctionCallPermission {
                    allowance: 10.into(), // Maximum number of function call allowances
                    receiver_id: "receiver_account".to_string(),
                    method_names: vec!["allowed_method".to_string()],
                },
            },
        },
    );
}
Token transfer
When transferring tokens, it is essential to include checks and validations to prevent accidental loss. For instance, developers can verify that the recipient account exists and is valid before initiating the transfer. Here’s an example in Rust using the NEAR SDK:
pub fn transfer_tokens2(self, recipient: near_sdk::AccountId, amount: near_sdk::Balance) {
    assert!(
        env::is_valid_account_id(recipient.as_bytes()),
        "Invalid recipient account"
    );
    let sender_balance = env::account_balance();
    assert!(sender_balance >= amount, "Insufficient balance");
    // Perform the token transfer
    Promise::new(recipient).transfer(amount);
}
Smart contracts
I found it quite useful to explore the partially interactive Anatomy of a Contract.
Introduction: Official Near Introduction
Install dependencies
# Install Rust: https://www.rust-lang.org/tools/install
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Contracts will be compiled to wasm, so we need to add the wasm target
rustup target add wasm32-unknown-unknown
# Install the NEAR CLI to deploy and interact with the contract
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/near/near-cli-rs/releases/latest/download/near-cli-rs-installer.sh | sh
# Install cargo near to help building the contract
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/near/cargo-near/releases/latest/download/cargo-near-installer.sh | sh
Create a smart contract default skeleton:
cargo near new hello-near
After ensuring you have an account you may build the smart contract:
cargo near build
Your wasm will be generated, which will then be deployable, for example to the testnet:
near contract deploy <created-account> use-file ./target/near/hello_near.wasm without-init-call network-config testnet sign-with-keychain send
Sample output:
near contract deploy vfp.testnet use-file ./target/near/hello_near.wasm without-init-call network-config testnet sign-with-keychain send
Unsigned transaction:
signer_id: vfp.testnet
receiver_id: vfp.testnet
actions:
-- deploy contract 3v7ievz4W6UKBwWubLvyJbSmsSEHnrnXJhRPqJNBQNaK
▹▸▹▹▹ Signing the transaction with a key saved in the secure keychain ...
Your transaction was signed successfully.
Public key: ed25519:2nh8uWoxsHDj9hzUKfkabtL7YDSTBNPSTDPsaz8ELQoX
Signature: ed25519:48m8ZqdQ3xjFkAogqQoameH6zgVC64kcKkh56PjidBe1B2AW6UUF8cND3QkvZu1SXqE18KzPYyYYxczih5ZjXZdD
▹▹▸▹▹ Sending transaction ...
--- Logs ---------------------------
Logs [vfp.testnet]: No logs
--- Result -------------------------
Empty result
------------------------------------
Contract code has been successfully deployed.
Gas burned: 7.4 Tgas
Transaction fee: 0.0007339617339116 NEAR
Transaction ID: C8anQCbcTMwsHyneBQhmTnrLeR4EcNuqkXt8YaDo9nGP
To see the transaction in the transaction explorer, please open this url in your browser:
https://explorer.testnet.near.org/transactions/C8anQCbcTMwsHyneBQhmTnrLeR4EcNuqkXt8YaDo9nGP
Here is your console command if you need to script it or re-run:
near contract deploy vfp.testnet use-file ./target/near/hello_near.wasm without-init-call network-config testnet sign-with-keychain send
Interacting with smart contracts
From the CLI
Querying state:
near view <created-account> <function-name>
Updating state:
near call <created-account> set_greeting '{"greeting": "hola"}' --accountId <created-account>
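For example, against the hello-near contract deployed to vfp.testnet in the sample output above:
near view vfp.testnet get_greeting
near call vfp.testnet set_greeting '{"greeting": "hola"}' --accountId vfp.testnet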
From code
The same interactions look roughly like this (illustrative pseudocode; the exact API depends on the client library you use):

```rust
// Instantiate a contract object
let contract = Contract::new(account_id, contract_id, signer);

// Call a method on the contract
contract.call_method("method_name", json!({ "param": "value" }));

// Get contract state
let state: ContractState = contract.view_method("get_state", json!({}));
```
Handling tokens
Again as illustrative pseudocode:

```rust
// Transfer tokens from one account to another
let sender = near.get_account("sender_account");
let recipient = near.get_account("recipient_account");
sender.transfer(&recipient, 100);

// Check token balance
let balance = recipient.get_balance();
```
Storage
In the context of smart contracts it is always a good idea to keep storage requirements low. For small amounts of state, prefer the native collections provided by the language (e.g. Vec, HashMap); these are serialized into a single value and stored together. If you have no alternative to storing a large amount of data, take a look at the SDK collections provided by the NEAR SDK, which store each entry under its own key and load only the entries you touch.
There is a very useful table in the Near documentation with the complexity of the typical operations on each of the provided structures: https://docs.near.org/build/smart-contracts/anatomy/collections#complexity
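As a sketch of the difference (assuming `near_sdk::collections`; every SDK collection needs a unique storage-key prefix so its entries do not collide in the account's key-value store):

```rust
use near_sdk::collections::UnorderedMap;

pub struct State {
    // Native collection: serialized as a single value and
    // read/written as a whole on every access.
    small_list: Vec<u64>,
    // SDK collection: each entry lives under its own storage key
    // (prefixed with b"b") and is loaded only when accessed.
    balances: UnorderedMap<String, u128>,
}

impl State {
    pub fn new() -> Self {
        Self {
            small_list: Vec::new(),
            balances: UnorderedMap::new(b"b".to_vec()),
        }
    }
}
```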
Vector
Dynamic arrays. (The example below uses Rust's native Vec; the SDK equivalent is near_sdk::collections::Vector.)

```rust
// Declare a vector of u64 elements
let mut my_vector: Vec<u64> = Vec::new();

// Add elements to the vector
my_vector.push(10);
my_vector.push(20);
my_vector.push(30);

// Access elements in the vector
let second_element = my_vector[1];
```
LookupSet
An unordered collection of unique elements with efficient membership checks.

```rust
use near_sdk::collections::LookupSet;

// Declare a LookupSet of string elements
// (every SDK collection needs a unique storage-key prefix)
let mut my_lookupset: LookupSet<String> = LookupSet::new(b"s".to_vec());

// Add elements to the LookupSet
my_lookupset.insert(&"apple".to_string());
my_lookupset.insert(&"banana".to_string());

// Check membership
let contains_apple = my_lookupset.contains(&"apple".to_string());
```
UnorderedMap
A key-value data structure that does not guarantee any specific order of elements.
```rust
use near_sdk::collections::UnorderedMap;

// Declare an UnorderedMap with u32 keys and string values
let mut my_unorderedmap: UnorderedMap<u32, String> = UnorderedMap::new(b"m".to_vec());

// Add key-value pairs to the UnorderedMap
my_unorderedmap.insert(&1, &"value1".to_string());
my_unorderedmap.insert(&2, &"value2".to_string());

// Iterate over the key-value pairs
for (key, value) in my_unorderedmap.iter() {
    // Process each key-value pair
}
```

UnorderedSet
An unordered collection of unique elements that, unlike LookupSet, can be iterated.

```rust
use near_sdk::collections::UnorderedSet;

// Declare an UnorderedSet of u32 elements
let mut my_unorderedset: UnorderedSet<u32> = UnorderedSet::new(b"u".to_vec());

// Add elements to the UnorderedSet
my_unorderedset.insert(&1);
my_unorderedset.insert(&2);

// Iterate over the elements
for element in my_unorderedset.iter() {
    // Process each element
}
```

LookupMap
A non-iterable key-value store with efficient lookups.

```rust
use near_sdk::collections::LookupMap;

// Declare a LookupMap with string keys and u64 values
let mut my_lookupmap: LookupMap<String, u64> = LookupMap::new(b"l".to_vec());

// Add key-value pairs to the LookupMap
my_lookupmap.insert(&"key1".to_string(), &10);
my_lookupmap.insert(&"key2".to_string(), &20);

// Access values based on keys
let value = my_lookupmap.get(&"key1".to_string());
```

TreeMap
An ordered key-value store; iteration yields keys in sorted order.

```rust
use near_sdk::collections::TreeMap;

// Declare a TreeMap with u64 keys and string values
let mut my_treemap: TreeMap<u64, String> = TreeMap::new(b"t".to_vec());

// Add key-value pairs to the TreeMap
my_treemap.insert(&3, &"value3".to_string());
my_treemap.insert(&1, &"value1".to_string());
my_treemap.insert(&2, &"value2".to_string());

// Iterate over the key-value pairs in sorted order
for (key, value) in my_treemap.iter() {
    // Process each key-value pair
}
```
Smart Contracts Unit Testing
Unit tests are useful for checking code integrity and detecting basic errors in isolated methods. However, since unit tests do not run on a blockchain, there is a lot they cannot detect. Unit tests are not suitable for:
- Testing gas and storage usage
- Testing transfers
- Testing cross-contract calls
- Testing complex interactions, e.g. multiple users depositing money into the contract
For all these cases it is necessary to complement unit tests with integration tests.
You can also check the official NEAR unit test documentation for more details.
If you want to execute all of your unit tests you can run them with:
cargo test
Or, if you prefer to execute just one test, use the following command:
cargo test --lib -- <module-name>::<test-name> --exact --show-output
Replacing <module-name> and <test-name> with the corresponding values.
For example, these are the tests available in the Hello Contract:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn get_default_greeting() {
        let contract = Contract::default();
        // this test did not call set_greeting so should return the default "Hello" greeting
        assert_eq!(contract.get_greeting(), "Hello");
    }

    #[test]
    fn set_then_get_greeting() {
        let mut contract = Contract::default();
        contract.set_greeting("howdy".to_string());
        assert_eq!(contract.get_greeting(), "howdy");
    }
}
```
The test module is called tests, and you can run either the get_default_greeting or the set_then_get_greeting test on its own.
Run only get_default_greeting test:
cargo test --lib -- tests::get_default_greeting --exact --show-output
Output:
Finished `test` profile [unoptimized + debuginfo] target(s) in 0.57s
Running unittests src/lib.rs (target/debug/deps/near_smart_contract-fdb69a096598ebaf)
running 1 test
test tests::get_default_greeting ... ok
successes:
successes:
tests::get_default_greeting
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s
Run only set_then_get_greeting test:
cargo test --lib -- tests::set_then_get_greeting --exact --show-output
Output:
Finished `test` profile [unoptimized + debuginfo] target(s) in 0.19s
Running unittests src/lib.rs (target/debug/deps/near_smart_contract-fdb69a096598ebaf)
running 1 test
test tests::set_then_get_greeting ... ok
successes:
---- tests::set_then_get_greeting stdout ----
Saving greeting: howdy
successes:
tests::set_then_get_greeting
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.01s
Smart Contracts Integration Testing
In NEAR, integration tests are implemented using a framework called Workspaces. It comes in two flavors, Rust and TypeScript; for the purposes of this document we'll focus on the Rust flavor, for obvious reasons.
You can also check the official NEAR integration test documentation for more details.
Adding Near Workspaces to your project
Near Workspaces for Rust repo: near-workspaces-rs
To add Workspaces to your project, add it to the [dev-dependencies] section of your Cargo.toml file. Don't add it to the [dependencies] section, since Workspaces does not currently compile to WASM.
[dev-dependencies]
near-sdk = { version = "5.7", features = ["unit-testing"] }
near-workspaces = { version = "0.16", features = ["unstable"] }
tokio = { version = "1.12.0", features = ["full"] }
serde_json = "1"
Testing using Workspaces
The following code snippet is based on the Hello Contract example. It is written in Rust because, at the time of writing, the Near Workspaces documentation has no Rust example of instantiating a contract from scratch.
```rust
use serde_json::json; // needed for the json! macro below

#[tokio::test]
async fn test_hello_contract() -> anyhow::Result<()> {
    let worker = near_workspaces::sandbox().await?;
    let contract_wasm: Vec<u8> = near_workspaces::compile_project("./").await?;
    let contract = worker.dev_deploy(&contract_wasm).await?;

    let account = worker.dev_create_account().await?;

    let greeting: serde_json::Value = contract
        .view("get_greeting")
        .await?
        .json()?;
    assert_eq!(greeting, "Hello");

    let outcome = account
        .call(contract.id(), "set_greeting")
        .args_json(json!({
            "greeting": "Hola Mundo",
        }))
        .transact()
        .await?;
    println!("set_greeting outcome: {:#?}", outcome);

    let greeting: serde_json::Value = contract
        .view("get_greeting")
        .await?
        .json()?;
    assert_eq!(greeting, "Hola Mundo");

    Ok(())
}
```
Code Breakdown
Test Annotation
```rust
#[tokio::test]
async fn test_hello_contract() -> anyhow::Result<()> {
    // ...
}
```
- #[tokio::test]: Marks the function as an asynchronous test using the tokio runtime.
- async fn test_hello_contract() -> anyhow::Result<()>: Defines an asynchronous test function that returns a Result type from the anyhow crate for error handling.
Initialize Sandbox Environment
```rust
let worker = near_workspaces::sandbox().await?;
```
- Initializes the NEAR Workspaces sandbox environment, which simulates the NEAR blockchain for testing.
Compile and Deploy Contract
```rust
let contract_wasm: Vec<u8> = near_workspaces::compile_project("./").await?;
let contract = worker.dev_deploy(&contract_wasm).await?;
```

- Compiles the smart contract located in the current directory ("./") and stores the compiled wasm bytecode in contract_wasm.
- Deploys the compiled contract to the sandbox environment and stores the deployed contract instance in contract.
Create a Test Account
```rust
let account = worker.dev_create_account().await?;
```
- Creates a new test account in the sandbox environment.
Call get_greeting Method
```rust
let greeting: serde_json::Value = contract
    .view("get_greeting")
    .await?
    .json()?;
assert_eq!(greeting, "Hello");
```

- Calls the get_greeting view method on the deployed contract.
- Parses the returned JSON value and stores it in greeting.
- Asserts that the initial greeting is "Hello".
Call set_greeting Method
```rust
let outcome = account
    .call(contract.id(), "set_greeting")
    .args_json(json!({
        "greeting": "Hola Mundo",
    }))
    .transact()
    .await?;
println!("set_greeting outcome: {:#?}", outcome);
```

- Calls the set_greeting method on the contract, passing the new greeting "Hola Mundo" as an argument.
- Executes the transaction and stores the outcome.
- Prints the outcome of the set_greeting call.
Verify Updated Greeting
```rust
let greeting: serde_json::Value = contract
    .view("get_greeting")
    .await?
    .json()?;
assert_eq!(greeting, "Hola Mundo");
```

- Calls the get_greeting method again to verify the updated greeting.
- Asserts that the greeting has been updated to "Hola Mundo".
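To run the integration test, use the same command as for unit tests; near_workspaces::sandbox() downloads and starts a local sandbox node automatically the first time the test runs, so no extra setup is needed beyond the dev-dependencies above.
cargo test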