Launching and Connecting to a Free AWS EC2 Instance for NASA Earthdata Cloud Access
This guide shows you how to:
- Launch a free Amazon EC2 virtual server in the cloud (Free Tier eligible)
- Securely connect to it from your computer
- Prepare the instance for working with NSIDC DAAC Earthdata Cloud data
- Transfer files between your computer and the instance
No prior cloud experience needed — this walkthrough is designed for first-time users!
Prerequisites
- An AWS account
- A NASA Earthdata Login account
- A terminal application:
- macOS/Linux → Terminal
- Windows → Git Bash or WSL
Part 1: Launch a Free EC2 Instance
1. Log in to the AWS Console
- Go to AWS Management Console and sign in.
2. Set the Correct AWS Region
- In the Region selector (upper-right corner), choose United States (Oregon) `us-west-2`.
- Keeping all resources in the same region avoids unexpected cross-region data transfer costs.
3. Open the EC2 Dashboard
Option 1: Search Bar
- Type EC2 in the search bar at the top → select EC2 under Services.
Option 2: All Services Menu
- Click All services in the left navigation menu → Under the Compute category, select EC2.
4. Launch a New Instance
- Click Launch instance (or go to Instances in the left menu — the button is in the upper-right corner)
- Use the following settings:
| Section | Choose… |
|---|---|
| Name and tags | `my_earthdata_ec2` (or your preferred name) |
| Application and OS Images | Ubuntu Server 24.04 LTS (Free Tier eligible); set Architecture to 64-bit (x86) |
| Instance type | t3.micro (Free Tier eligible) |
| Key pair | Create new → name it (e.g., `my_earthdata_key`) → RSA → `.pem` → Download |
| Network settings | Create security group → allow SSH traffic from My IP |
| Storage | 15 GiB or more |
Optional Recommendations
- Add a security group rule for port 8888 if you plan to run JupyterLab. Guide here: https://nsidc.org/data/user-resources/help-center/accessing-nasa-earthdata-cloud-jupyterlab-ec2
- Increase storage if working with large datasets (may incur costs).
- Explore other AMIs or instance types (check Free Tier eligibility to avoid charges).
5. Confirm the Instance is Running
- Click Launch instance
- Go to the Instances panel (left navigation → Instances)
- Confirm the Instance state column shows Running before continuing
Stop or Terminate Your Instance
| Action | What It Does | When to Use |
|---|---|---|
| Stop instance | Shuts down the instance but keeps settings, storage, and data. You can restart it later. | If you’ll use it again soon. |
| Terminate instance | Permanently deletes the instance and its storage (unless you chose “Keep”). | If you’re completely done with this instance. |
To Stop or Terminate:
1. In the AWS Management Console, go to EC2 → Instances.
2. Select your instance.
3. Click the Instance state dropdown.
4. Choose Stop instance or Terminate instance.
5. Confirm your choice.
Part 2: Connect to Your EC2 Instance
1. Select Your Instance
- On the Instances page (where you landed after launching), check the box next to your new instance (e.g., `my_earthdata_ec2`).
2. Open the Connect Panel
- At the top right of the Instances table, click Connect (located to the left of Instance state and Actions).
- Select the SSH client tab. This shows the SSH command you will use to connect.
3. Open a Terminal on Your Computer
- macOS/Linux: Open Terminal
- Windows: Open Git Bash or WSL
4. Prepare Your `.pem` Key File
Your `.pem` file is the secure key that lets you log in. Its permissions must be restricted.
macOS/Linux
Navigate to the folder containing your `.pem` key file and run:
chmod 400 "<your_key>.pem"
This ensures only you can read the key, which SSH requires.
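If you want to confirm the permission change worked, you can check the file's mode bits with `stat` (the key filename below is a placeholder; substitute your own):

```shell
# Create a stand-in for the downloaded key, restrict it, and verify the mode
touch my_earthdata_key.pem
chmod 400 my_earthdata_key.pem
stat -c "%a" my_earthdata_key.pem   # prints 400 on Linux (on macOS use: stat -f "%Lp")
```

SSH refuses keys that are readable by other users, so seeing `400` here means the connection step later will not fail with an "unprotected private key file" error.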
Windows
1. Locate your .pem file in File Explorer.
2. Right-click → Properties → Security → Advanced
3. Disable inheritance:
Click Disable inheritance
Select Convert inherited permissions into explicit permissions.
4. Remove unnecessary users/groups:
Select SYSTEM, Administrators, Authenticated Users, and Everyone (if present), and click Remove
5. Keep only your primary user account and ensure it has Full control.
6. Click OK to apply changes.
Your `.pem` file is now secured, equivalent to `chmod 400`.
5. Copy the SSH Command
From the SSH client tab, copy the example command. It usually looks like this:
ssh -i "<your_key>.pem" ubuntu@<your_instance_public_dns>
- Replace `<your_key>` with your `.pem` key filename.
- Replace `<your_instance_public_dns>` with your instance’s Public IPv4 DNS (found in the Connect panel).
6. Connect to Your Instance
- Paste the command into your terminal and press Return.
- If prompted with `Are you sure you want to continue connecting?`, type `yes` and press Return.
- If successful, your prompt will change to show the remote EC2 instance (e.g., `ubuntu@ip-172-31-xx-xx:~$`).
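Once connected, a few quick commands can confirm you really are on the remote machine (these work on any Linux host, so they are safe to try):

```shell
# Optional sanity checks after logging in
uname -s    # kernel name; prints "Linux" on an Ubuntu instance
whoami      # "ubuntu" on a fresh Ubuntu AMI
df -h /     # free space remaining on the root volume
```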
Part 3: Prepare Your EC2 Instance for Cloud Access
3.1 (Optional) Install AWS CLI
You only need this step if you want to download cloud-hosted data directly from S3. Otherwise, skip to 3.2.
1. Download the AWS CLI v2 installer (Linux x86_64):
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
2. Unzip the installer (install unzip if needed):
sudo apt update && sudo apt install -y unzip
unzip awscliv2.zip
3. Run the installer:
sudo ./aws/install
4. Verify installation:
aws --version
You should see output like: aws-cli/2.x.x Python/3.x.x Linux/x86_64
3.2 Set Up NASA Earthdata Access
You’ll need to configure your EC2 instance so it can authenticate with NASA Earthdata services.
1. Create a `.netrc` file with your Earthdata Login
echo "machine urs.earthdata.nasa.gov login your_username password your_password" >> ~/.netrc
chmod 600 ~/.netrc
- Replace `your_username` and `your_password` with your NASA Earthdata Login credentials.
- The `.netrc` file stores your login information so that Earthdata services can verify your requests; `chmod 600` keeps it readable only by you.
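The same `.netrc` step can be written with shell variables, which keeps the two values you need to edit in one obvious place (`your_username` and `your_password` below are placeholders for your real Earthdata credentials):

```shell
# Placeholder credentials; replace with your real Earthdata Login before running
EDL_USER="your_username"
EDL_PASS="your_password"
# printf avoids echo's portability quirks; >> appends without clobbering an existing file
printf 'machine urs.earthdata.nasa.gov login %s password %s\n' "$EDL_USER" "$EDL_PASS" >> "$HOME/.netrc"
chmod 600 "$HOME/.netrc"
```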
2. (Optional) Configure AWS temporary credentials
If you installed AWS CLI, export your temporary AWS S3 credentials:
export AWS_ACCESS_KEY_ID=<your_accessKeyId>
export AWS_SECRET_ACCESS_KEY=<your_secretAccessKey>
export AWS_SESSION_TOKEN=<your_sessionToken>
- Replace `<your_accessKeyId>`, `<your_secretAccessKey>`, and `<your_sessionToken>` with your actual AWS credentials.
- Get your temporary credentials here: NSIDC S3 credentials
- These credentials expire after a few hours, so you’ll need to refresh them for each new session.
- To confirm your credentials (optional, if using AWS CLI), run `aws sts get-caller-identity`; it returns your AWS account identity if the credentials are valid.
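Because the session expires, a quick check that all three variables are still set can save a confusing CLI error later. A minimal sketch (the variable names match the exports above):

```shell
# Warn about any temporary-credential variable that is missing or empty
# (${!v} is bash indirect expansion: the value of the variable named by $v)
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; do
  if [ -z "${!v:-}" ]; then
    echo "missing: $v -- re-export your temporary credentials"
  fi
done
```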
Part 4: Transfer Files To and From Your EC2 Instance
You can move files between your local computer and your EC2 instance using the `scp` (secure copy) command.
4.1 Upload a File to EC2
To send a file (e.g., a text file containing S3 URLs) from your computer to your EC2 instance:
scp -i <your-key>.pem local_file.txt ubuntu@<your-public-dns>:/home/ubuntu/
- Replace `<your-key>.pem` with your SSH key file.
- Replace `local_file.txt` with the file you want to upload.
- Replace `<your-public-dns>` with your instance’s Public DNS (found in the Connect panel in Part 2).
4.2 Download Files from EC2
To copy files from EC2 back to your local machine:
Download all files in /home/ubuntu/
scp -i <your-key>.pem -r ubuntu@<your-public-ip>:/home/ubuntu/ .
- `-r` = recursive (downloads folders and the files inside).
- `.` = your current local directory.
Download only .h5 files
scp -i my-key.pem "ubuntu@<your-public-ip>:/home/ubuntu/*.h5" .
- Quotes prevent your local shell from expanding `*.h5` before sending the command.
- This ensures the wildcard expansion happens on the EC2 instance rather than on your local machine.
- You can use this wildcard technique to target any file extension you want to download, not just `.h5` files.
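The quoting rule can be demonstrated locally without touching EC2 at all. In a directory that contains matching files, an unquoted glob is expanded by your shell before the command ever runs:

```shell
# Make a scratch directory with two files matching the pattern
cd "$(mktemp -d)"
touch a.h5 b.h5
echo *.h5      # local shell expands the pattern first: a.h5 b.h5
echo "*.h5"    # quoted: the literal pattern is passed through: *.h5
```

With `scp`, the quoted form means the literal `*.h5` reaches the EC2 instance, whose shell then expands it against the remote files.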
Part 5: (Optional) Download Data from S3 on EC2
If you have AWS CLI installed and configured on your EC2 instance, you can download multiple S3 files efficiently.
1. Upload a text file containing a list of S3 URLs to your EC2 instance (see Part 4.1)
2. On the EC2 terminal, run the following script to download each file:
while read -r s3url; do
echo "Downloading $s3url"
aws s3 cp "$s3url" . # Assumes credentials are configured and valid
done < S3-URLs.txt
- Replace `S3-URLs.txt` with the name of your text file.
- The script downloads each file listed in the text file to the current directory on your EC2 instance.
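You can try the loop safely as a dry run before spending any bandwidth. This variant skips blank lines and `#` comments and only prints what it would fetch (the bucket and filenames below are hypothetical):

```shell
# Dry-run version: print each URL instead of downloading it
while read -r s3url; do
  case "$s3url" in ''|'#'*) continue ;; esac   # skip blanks and comments
  echo "would download: $s3url"
done <<'EOF'
# example URL list (hypothetical)
s3://example-bucket/granule1.h5

s3://example-bucket/granule2.h5
EOF
```

Once the printed list looks right, swap the `echo` back to `aws s3 cp "$s3url" .` and feed it your real `S3-URLs.txt`.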
Part 6: Beginner-Friendly Ideas
Once your EC2 instance is ready, try the following to become comfortable with your cloud environment:
- Explore Linux commands: `ls`, `cd`, `mkdir`, `rm`
- Test file transfers with `scp`
- Edit files directly on EC2 with `nano` or `vim`
- Run Python scripts or Jupyter notebooks
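A short practice session tying those commands together (safe to run anywhere, including on the instance):

```shell
mkdir -p ~/practice && cd ~/practice   # make and enter a scratch directory
echo "hello from EC2" > note.txt       # create a small file
ls                                     # lists: note.txt
cat note.txt                           # prints: hello from EC2
cd ~ && rm -r ~/practice               # clean up when done
```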
Final Thoughts
You now have a fully functional, Free Tier EC2 environment ready for cloud-based Earth science workflows. This setup gives you:
- Secure access to your EC2 instance via SSH
- File transfer capabilities between your computer and EC2
- Access to NASA Earthdata Cloud and S3 data
- A reproducible workflow for Python scripts or Jupyter notebooks
With this foundation, you can safely explore NASA Earthdata collections and scale up workflows as needed, always checking Free Tier eligibility to avoid unexpected costs.