AWS Certified Machine Learning Engineer — 72 Hrs Study Guide

Complete 72-Hour Intensive Study Guide with Visual Diagrams & Cheat Sheet


⚡ LEARN IN 72 HOURS: This guide covers the key details you need to know for the AWS Certified Machine Learning Engineer exam. Follow the intensive study plan, memorize the mnemonics, use the visual diagrams for quick reference, and review the cheat sheet before the exam. You've got this!


📊 Domain Breakdown & Weightings

| Domain | Weight | Focus Areas |
| --- | --- | --- |
| Domain 1: Data Preparation | 28% | Storage, ETL, Feature Engineering, Data Quality |
| Domain 2: Model Development | 26% | Algorithms, Training, Tuning, Evaluation |
| Domain 3: Deployment | 22% | Endpoints, MLOps, Pipelines, Orchestration |
| Domain 4: Monitoring/Security | 24% | Model Monitor, CloudWatch, IAM, KMS, VPC |

🗓️ 72-Hour Intensive Study Schedule

Day 1 (24 Hours)

Hours 1-6: Domain 1 — Data Preparation (Morning Session)

⏰ Hours 1-2: Storage & Data Formats

  • Memorize: S-KEFS-R (Storage), PAJRC (Data Formats)
  • Learn: When to use S3 vs Kinesis vs Redshift
  • Focus: Parquet vs RecordIO vs CSV - exam loves this!
  • Practice: Create S3 bucket, upload data in different formats

⏰ Hours 3-4: ETL & Processing Services

  • Memorize: GEEKS-DW (ETL Services)
  • Learn: Glue vs EMR vs Data Wrangler decision tree
  • Hands-on: Create Glue job, use Data Wrangler in SageMaker
  • Key concept: When serverless (Glue) vs managed clusters (EMR)

⏰ Hours 5-6: Feature Engineering

  • Memorize: BITES-NO (Feature Engineering), MOUD (Data Quality)
  • Learn: SMOTE, imputation methods (mean/KNN/MICE)
  • Critical: Handling imbalanced classes - VERY common on exam!
  • Practice: Apply transformations in Data Wrangler

Hours 7-12: Domain 2 — Model Development (Afternoon Session)

⏰ Hours 7-9: SageMaker Algorithms

  • Memorize: BLIINKS-FPXDR (ALL algorithms)
  • CRITICAL: XGBoost, Linear Learner, DeepAR, RCF - most tested!
  • Learn: Algorithm selection decision tree
  • Practice: Train XGBoost model on sample data

⏰ Hours 10-11: Hyperparameter Tuning

  • Memorize: BRAGS (Tuning strategies)
  • Focus: Bayesian vs Random vs Grid - when to use each
  • XGBoost params: num_round, eta, max_depth (memorize these!)
  • Practice: Run hyperparameter tuning job

⏰ Hour 12: Evaluation Metrics

  • Memorize: FARM-CAR (Classification + Regression metrics)
  • Learn: F1 vs Accuracy vs ROC-AUC - when to use which
  • Key: Imbalanced classes → F1 score, not accuracy!

Hours 13-18: Training & Practice (Evening Session)

⏰ Hours 13-14: Training Modes & Infrastructure

  • Memorize: PFIS (Training modes)
  • CRITICAL: Pipe Mode vs File Mode - VERY common question!
  • Learn: Spot training, instance types (ml.p3, ml.c5)
  • Cost optimization: Pipe + Spot = huge savings

⏰ Hours 15-18: Practice Questions

  • Take practice exam: 30-40 questions on Domains 1-2
  • Review wrong answers: Understand WHY you got them wrong
  • Flashcards: Create for concepts you struggle with
  • Write out mnemonics: 10 times each from memory

Hours 19-24: Review & Sleep (Night Session)

  • Recite all mnemonics learned today
  • Create mind map connecting Domain 1 → Domain 2
  • Review visual diagrams (print them out!)
  • SLEEP (4-5 hours minimum) - Your brain needs sleep to consolidate learning

Day 2 (24 Hours)

Hours 25-30: Domain 3 — Deployment (Morning Session)

⏰ Hours 25-27: Deployment Options

  • Memorize: REBAS (Deployment types)
  • SUPER CRITICAL: Real-time vs Serverless vs Batch vs Async
  • Learn: Deployment decision tree - memorize this cold!
  • Practice: Deploy model as real-time endpoint
  • Key trap: "Real-time" doesn't always mean Real-time Endpoint!

⏰ Hours 28-30: MLOps & Orchestration

  • Memorize: SPEC-SAM (MLOps services)
  • Focus: SageMaker Pipelines (preferred solution!)
  • Learn: Step Functions vs Airflow vs Pipelines
  • Practice: Create simple SageMaker Pipeline

Hours 31-36: Domain 3 Continued & Domain 4 Start (Afternoon Session)

⏰ Hours 31-32: Endpoint Optimization

  • Memorize: MASS-EI (Endpoint optimization)
  • Learn: Multi-Model Endpoints, Auto-scaling, Shadow testing
  • Cost focus: Inferentia, Elastic Inference savings
  • Practice: Configure auto-scaling for endpoint

⏰ Hours 33-36: Domain 4 — Monitoring

  • Memorize: CM-TAXI (Monitoring services)
  • CRITICAL: Model Monitor - 4 drift types (memorize all!)
  • Learn: CloudWatch metrics for SageMaker
  • Practice: Set up Model Monitor for drift detection

Hours 37-45: Security & Practice (Evening Session)

⏰ Hours 37-39: Security

  • Memorize: I-MAKE-VOWS (Security services), VINE (SageMaker security)
  • Focus: IAM roles, VPC mode, KMS encryption
  • Learn: Encryption at rest vs in transit
  • Common scenario: "Most secure option" = VPC + KMS + IAM

⏰ Hours 40-45: Full Practice Exam

  • Take full practice exam: 65 questions, timed (170 min)
  • Simulate real conditions: No breaks, no phone
  • Target score: 75%+ to feel confident
  • Review ALL answers: Right and wrong, understand concepts

Hours 46-53: Intensive Review & Sleep

  • Go through ALL mnemonics - write them out 10x each
  • Review all visual diagrams and decision trees
  • Identify weak areas from practice exam
  • SLEEP (4-5 hours) - Critical for memory consolidation

Day 3 (24 Hours) — Exam Day

Hours 54-62: Final Review & Preparation

⏰ Hours 54-56: Cheat Sheet Review

  • Print cheat sheet: Review the entire cheat sheet section
  • Memorize: All quick decision rules
  • Focus: Common traps section - don't fall for these!
  • Write down: Master mnemonics on paper/whiteboard

⏰ Hours 57-59: Final Practice Test

  • Take another full practice exam (65 questions)
  • Time yourself strictly
  • Target: 80%+ correct
  • Quick review of wrong answers only

⏰ Hours 60-62: Light Review & Pre-Exam

  • NO new information - just review
  • Go through all visual diagrams one more time
  • Recite all mnemonics out loud
  • Relax, breathe, hydrate

🎮 The Master Framework: "ML-PIPE-DDMS"

"Build your ML-PIPE and remember DDMS!"

ML-PIPE = The ML Engineering Workflow:

  1. Model Development
  2. Load & Prepare Data (Data Preparation)
  3. Push to Production (Deployment)
  4. Inspect & Protect (Monitoring & Security)
  5. Pipelines (Orchestration)
  6. Evaluate Performance

DDMS = The 4 Critical Focus Areas:

  1. Data (28% of exam)
  2. Development (26% of exam)
  3. Monitoring (24% of exam)
  4. Shipping Code (Deployment 22% of exam)

🗄️ DOMAIN 1: Data Preparation for ML (28%)

⚡ HIGHEST WEIGHT: This domain is 28% of your exam - master it!

📦 Storage Services: "S-KEFS-R"

Remember all AWS storage options for ML data

Think: "Safeguard Key Engineering Features on Secure Resources"

  1. S3 - Object storage for data lakes
  2. Kinesis - Real-time streaming data
  3. EBS - Block storage for EC2/EMR
  4. FSx - High-performance file systems (Lustre for ML)
  5. SageMaker Feature Store - Feature management
  6. Redshift - Data warehouse for analytics

💡 Memory Anchor

"My ML project needs S3 buckets, Kinesis streams, EBS volumes, FSx for HPC, SageMaker Feature Store, and Redshift for queries!"

📊 Data Formats: "PAJRC"

The 5 essential data formats for ML

Think: "Please Always Jot Record Correctly"

  1. Parquet - Columnar format (best for analytics)
  2. Avro - Binary format with schema
  3. JSON - Semi-structured text
  4. RecordIO-Protobuf - SageMaker's preferred format
  5. CSV - Simple tabular data

🎯 Exam Tip

Parquet = Analytics (columnar, compressed)
RecordIO = SageMaker training (pipe mode)
Avro = Streaming with schema evolution
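
To feel the format difference hands-on, here is a minimal sketch (assuming pandas with pyarrow and s3fs installed; the bucket name is hypothetical) that writes the same data to S3 as Parquet and as CSV:

```python
import pandas as pd

df = pd.DataFrame({
    "feature_1": [0.1, 0.2, 0.3],
    "feature_2": [10, 20, 30],
    "label": [0, 1, 0],
})

# Parquet: columnar and compressed - the analytics-friendly choice (Athena, Glue, EMR).
df.to_parquet("s3://my-ml-bucket/train/data.parquet", index=False)  # hypothetical bucket

# CSV: simple row-oriented text - widely supported, but larger and slower to scan.
df.to_csv("s3://my-ml-bucket/train/data.csv", index=False)
```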

⚙️ ETL & Processing: "GEEKS-DW"

Remember all data transformation services

Think: "GEEKS use Data Wrangling"

  1. Glue - Serverless ETL service
  2. EMR - Managed Hadoop/Spark clusters
  3. EMR Serverless - Auto-scaling Spark/Hive
  4. Kinesis Data Firehose - Stream ETL
  5. SageMaker Processing - ML-specific processing
  6. Data Wrangler - Visual data prep

🔥 Hot Exam Topic

Glue = Serverless, cost-effective
EMR = Custom code, complex processing
Data Wrangler = Visual, no code, 300+ transforms

🔧 Feature Engineering: "BITES-NO"

Master all feature engineering techniques

Think: "Feature engineering BITES, Need Optimization"

  1. Binning - Group continuous values into buckets
  2. Imputation - Handle missing data (mean/median/KNN/MICE)
  3. Transforming - Log, sqrt, polynomial transforms
  4. Encoding - One-hot, label, target encoding
  5. Scaling - Normalization, standardization (MinMaxScaler, StandardScaler)
  6. Normalization - Make features comparable
  7. Outlier handling - Remove or cap extreme values

⚡ Quick Reference

Imputation Methods:
• Mean/Median = Simple, fast
• KNN = Better accuracy, slower
• MICE = Most advanced, iterative

Unbalanced Data:
• SMOTE = Synthetic minority oversampling
• Random Oversampling = Duplicate minority
• Undersampling = Remove majority
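
A minimal scikit-learn / imbalanced-learn sketch of these ideas (toy values for illustration): mean vs KNN imputation, then SMOTE on an imbalanced label.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer
from imblearn.over_sampling import SMOTE

# Toy feature matrix with missing values and an imbalanced label (4 vs 2).
X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan],
              [5.0, 6.0], [6.0, 7.0], [7.0, 8.0]])
y = np.array([0, 0, 0, 0, 1, 1])

X_mean = SimpleImputer(strategy="mean").fit_transform(X)  # simple, fast
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)        # slower, often more accurate

# SMOTE synthesizes new minority-class rows rather than duplicating existing ones.
X_res, y_res = SMOTE(k_neighbors=1).fit_resample(X_knn, y)
print(np.bincount(y_res))  # classes are now balanced
```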

🎯 Data Quality Issues: "MOUD"

Remember the 4 main data quality challenges

Think: "Get the MOUD (mood) of your data right!"

  1. Missing values - Impute or drop
  2. Outliers - Detect & handle (>3σ from mean)
  3. Unbalanced classes - SMOTE, over/undersampling
  4. Duplicate records - Remove or aggregate

🤖 DOMAIN 2: ML Model Development (26%)

🧠 SageMaker Algorithms: "BLIINKS-FPXDR"

Master the 12 most important SageMaker algorithms

Think: "BLIINKS before making FP (false positive) XDR (extreme detection rate)"

BLIINKS (first group):

  1. BlazingText - Text classification, word2vec
  2. Linear Learner - Classification/Regression
  3. Image Classification - Computer vision
  4. IP Insights - Anomaly detection for IPs
  5. Neural Topic Model (NTM) - Topic discovery
  6. KNN - Classification/Regression
  7. Sequence2Sequence (Seq2Seq) - Translation

FPXDR (second group):

  1. Factorization Machines - Recommendation
  2. PCA - Dimensionality reduction
  3. XGBoost - Gradient boosting (most popular!)
  4. DeepAR - Time series forecasting
  5. Random Cut Forest (RCF) - Anomaly detection

🎯 Algorithm Selection Guide

Classification/Regression: Linear Learner, XGBoost, KNN
Image Tasks: Image Classification, Object Detection
Text Tasks: BlazingText, Seq2Seq, NTM
Anomaly Detection: Random Cut Forest, IP Insights
Time Series: DeepAR
Recommendations: Factorization Machines

⚙️ Hyperparameter Tuning: "BRAGS"

Remember tuning strategies

Think: "Good tuning BRAGS about results"

  1. Bayesian Optimization - Smart search (SageMaker default)
  2. Random Search - Random combinations
  3. Automatic Model Tuning (AMT) - SageMaker's service
  4. Grid Search - Exhaustive search
  5. Stochastic (Early Stopping) - Stop poor performers

💡 Exam Tip

Bayesian = Most efficient (SageMaker recommended)
Random = Better than grid, less expensive
Grid = Exhaustive, expensive, thorough
Early Stopping = Save cost, stop bad runs early
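
As a rough illustration (SageMaker Python SDK assumed; the `xgb_estimator`, metric, and ranges are hypothetical), a Bayesian tuning job with early stopping looks roughly like this:

```python
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

tuner = HyperparameterTuner(
    estimator=xgb_estimator,                 # assumed: an XGBoost Estimator defined elsewhere
    objective_metric_name="validation:auc",  # built-in XGBoost metric
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    strategy="Bayesian",                     # SageMaker's default, most efficient
    early_stopping_type="Auto",              # stop poor performers early to save cost
    max_jobs=20,
    max_parallel_jobs=2,
)
# tuner.fit({"train": train_input, "validation": val_input})  # hypothetical channels
```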

📊 Evaluation Metrics: "FARM-CAR"

Remember classification and regression metrics

Think: "Evaluate models on a FARM using a CAR"

FARM = Classification Metrics:

  1. F1 Score - Harmonic mean of precision & recall
  2. Accuracy - Correct predictions / Total predictions
  3. ROC-AUC - Area under ROC curve
  4. Matrix (Confusion) - TP, TN, FP, FN breakdown

CAR = Regression Metrics:

  1. Coefficient of Determination (R²) - Variance explained
  2. Absolute Error (MAE) - Mean Absolute Error
  3. RMSE - Root Mean Squared Error

🎯 When to Use Which Metric

F1 Score: Imbalanced classes, need balance of precision/recall
ROC-AUC: Binary classification, threshold-independent
RMSE: Regression, penalizes large errors more
MAE: Regression, robust to outliers
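
A quick scikit-learn sketch of the FARM-CAR metrics on toy predictions (all values made up for illustration):

```python
from sklearn.metrics import (f1_score, accuracy_score, roc_auc_score,
                             confusion_matrix, r2_score,
                             mean_absolute_error, mean_squared_error)

y_true, y_pred = [0, 0, 0, 1], [0, 0, 1, 1]
y_score = [0.1, 0.2, 0.6, 0.9]            # predicted probabilities for ROC-AUC

print(f1_score(y_true, y_pred))            # prefer over accuracy on imbalanced data
print(accuracy_score(y_true, y_pred))
print(roc_auc_score(y_true, y_score))      # threshold-independent
print(confusion_matrix(y_true, y_pred))    # TP / TN / FP / FN breakdown

y_reg_true, y_reg_pred = [3.0, 5.0, 7.0], [2.5, 5.5, 8.0]
print(r2_score(y_reg_true, y_reg_pred))                        # R²: variance explained
print(mean_absolute_error(y_reg_true, y_reg_pred))             # MAE: robust to outliers
print(mean_squared_error(y_reg_true, y_reg_pred) ** 0.5)       # RMSE: penalizes large errors
```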

💪 Training Modes: "PFIS"

SageMaker training optimization

Think: "PFISh for the best training mode"

  1. Pipe Mode - Stream data from S3 (fast, efficient)
  2. File Mode - Download entire dataset first (simple)
  3. Instance Types - ml.p3 (GPU), ml.c5 (CPU), ml.m5 (balanced)
  4. Spot Training - Save up to 90% on training costs

💰 Cost Optimization

Pipe Mode: Faster start, data streams from S3 (no full download to local storage), preferred for large datasets
Spot Training: Use with checkpointing for interruptible workloads
GPU Instances: p3 for training, g4 for inference
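
A minimal sketch of the Pipe + Spot combination (SageMaker Python SDK assumed; the image URI, role, and S3 paths are hypothetical placeholders):

```python
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri=xgboost_image_uri,             # assumed: built-in XGBoost container URI
    role=execution_role,                     # assumed: SageMaker execution role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    input_mode="Pipe",                       # stream from S3 instead of downloading first
    use_spot_instances=True,                 # managed Spot: up to ~90% cheaper
    max_run=3600,
    max_wait=7200,                           # must be >= max_run when using Spot
    checkpoint_s3_uri="s3://my-ml-bucket/checkpoints/",  # resume after interruptions
)
# estimator.fit({"train": "s3://my-ml-bucket/train/"})
```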


🚀 DOMAIN 3: Deployment and Orchestration (22%)

🎯 Deployment Options: "REBAS"

Remember all SageMaker deployment types

Think: "REBASe your model for production"

  1. Real-time Endpoints - Low latency, persistent
  2. Edge (Neo) - Deploy to edge devices (IoT)
  3. Batch Transform - Process large datasets offline
  4. Asynchronous Inference - Long-running requests
  5. Serverless Inference - Auto-scaling, pay per use

🎯 Deployment Selection Guide

Real-time: <1 sec latency, always-on, high cost
Serverless: Intermittent traffic, cold start OK, low cost
Batch Transform: Large batches, no real-time need
Async: Long processing (>60s), queue-based
Edge (Neo): IoT devices, no internet dependency
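
To make the real-time vs serverless trade-off concrete, here is a minimal sketch (SageMaker Python SDK assumed; `model` and the endpoint names are hypothetical) deploying the same model both ways:

```python
from sagemaker.serverless import ServerlessInferenceConfig

# Real-time: always-on instances, lowest latency, highest cost.
rt_predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="demo-realtime-endpoint",      # hypothetical name
)

# Serverless: scales to zero, pay per invoke, cold starts acceptable.
sl_predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,
        max_concurrency=5,
    ),
    endpoint_name="demo-serverless-endpoint",
)
```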

⚙️ MLOps & Orchestration: "SPEC-SAM"

Remember CI/CD and orchestration services

Think: "Write detailed SPECs for SAM (software)"

  1. SageMaker Pipelines - Native ML pipelines
  2. Projects - MLOps templates (CI/CD)
  3. EventBridge - Event-driven automation
  4. Code* Services - CodePipeline, CodeBuild, CodeDeploy
  5. Step Functions - Workflow orchestration
  6. Airflow (MWAA) - Apache Airflow managed service
  7. Model Registry - Version control for models

🔥 Hot Exam Topic

SageMaker Pipelines: Native, integrated, preferred
Step Functions: AWS-native, visual workflow
Airflow (MWAA): Complex DAGs, existing Airflow code
Model Registry: Track lineage, approve models
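
A bare-bones SageMaker Pipelines sketch (SageMaker Python SDK assumed; the `processor` and `estimator` objects are hypothetical and built elsewhere):

```python
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

# Two-step pipeline: prepare data, then train on the prepared output.
process_step = ProcessingStep(name="PrepareData", processor=processor)
train_step = TrainingStep(name="TrainModel", estimator=estimator,
                          depends_on=[process_step])

pipeline = Pipeline(name="demo-ml-pipeline", steps=[process_step, train_step])
# pipeline.upsert(role_arn=execution_role)   # create or update the pipeline definition
# pipeline.start()                           # kick off an execution
```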

⚡ Endpoint Optimization: "MASS-EI"

Endpoint scaling and optimization

Think: "The MASS of data needs EI (elastic inference)"

  1. Multi-Model Endpoints - Host multiple models on one endpoint
  2. Auto Scaling - Scale based on invocations or metrics
  3. Shadow Testing - Test new models with production traffic
  4. Serial Inference Pipeline - Chain multiple models
  5. Elastic Inference (EI) - Attach GPU acceleration
  6. Inferentia - AWS-designed ML chips (cost-effective)

💡 Performance Tips

Multi-Model: Many models, low traffic each
Auto Scaling: Target tracking on InvocationsPerInstance
Shadow Testing: 0% production impact
Inferentia: Up to 70% cost reduction
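
Target tracking on InvocationsPerInstance is configured through Application Auto Scaling; a minimal boto3 sketch (endpoint, variant name, and target value are hypothetical):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "endpoint/demo-realtime-endpoint/variant/AllTraffic"  # hypothetical

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

autoscaling.put_scaling_policy(
    PolicyName="InvocationsTargetTracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # invocations per instance per minute (example value)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)
```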


🛡️ DOMAIN 4: Monitoring, Maintenance & Security (24%)

📊 Monitoring Services: "CM-TAXI"

Remember all monitoring and logging services

Think: "Call a CM (CloudWatch Metrics) TAXI"

  1. CloudWatch - Metrics, logs, alarms
  2. Model Monitor - Detect drift & quality issues
  3. Trusted Advisor - Best practice checks
  4. Athena - Query S3 logs with SQL
  5. X-Ray - Distributed tracing
  6. Inference Logs (Data Capture) - Capture prediction request/response data

🎯 Model Monitor Drift Types

Data Quality Drift: Statistical properties change
Model Quality Drift: Accuracy metrics degrade
Bias Drift: Fairness metrics change
Feature Attribution Drift: Feature importance changes
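
A minimal data-quality monitoring sketch (SageMaker Python SDK assumed; role, endpoint name, and S3 URIs are hypothetical placeholders):

```python
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role=execution_role,             # assumed: SageMaker execution role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# 1) Baseline: compute statistics + constraints from the training data.
monitor.suggest_baseline(
    baseline_dataset="s3://my-ml-bucket/train/data.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-ml-bucket/monitor/baseline/",
)

# 2) Schedule: hourly comparison of captured endpoint traffic against the baseline.
monitor.create_monitoring_schedule(
    monitor_schedule_name="demo-data-quality-schedule",
    endpoint_input="demo-realtime-endpoint",     # data capture must be enabled on it
    output_s3_uri="s3://my-ml-bucket/monitor/reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```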

🔒 Security Services: "I-MAKE-VOWS"

Remember all AWS security services

Think: "I MAKE VOWS to secure my ML models"

  1. IAM - Identity and Access Management
  2. Macie - Discover PII in S3
  3. AWS Shield - DDoS protection
  4. KMS - Key Management Service (encryption)
  5. Encryption (at rest & in transit) - S3, EBS, SageMaker
  6. VPC - Virtual Private Cloud (network isolation)
  7. Organizations - Multi-account management
  8. WAF - Web Application Firewall
  9. Secrets Manager - Manage credentials

🔐 Encryption Best Practices

At Rest: S3-SSE, EBS encryption, SageMaker notebook encryption
In Transit: TLS/HTTPS for all data transfer
KMS Keys: Customer-managed keys for compliance
VPC: Use PrivateLink for SageMaker in VPC

🛡️ SageMaker Security: "VINE"

SageMaker-specific security features

Think: "Secure your ML like a VINE protects grapes"

  1. VPC Mode - Network isolation
  2. IAM Roles - Execution roles for notebooks/jobs
  3. Network Isolation - No internet access
  4. Encryption Everywhere - KMS for notebooks, training, endpoints
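
Putting VINE together, a "most secure option" training job might be configured roughly like this (SageMaker Python SDK assumed; ARNs, subnet/security-group IDs, and the image URI are hypothetical placeholders):

```python
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri=training_image_uri,
    role=execution_role,                           # least-privilege IAM execution role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    subnets=["subnet-0123456789abcdef0"],          # VPC mode: private subnets
    security_group_ids=["sg-0123456789abcdef0"],
    enable_network_isolation=True,                 # container gets no internet access
    volume_kms_key="arn:aws:kms:us-east-1:111122223333:key/example",  # encrypt training volume
    output_kms_key="arn:aws:kms:us-east-1:111122223333:key/example",  # encrypt model artifacts
    encrypt_inter_container_traffic=True,          # TLS between training containers
)
```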

📊 Visual Diagrams & Decision Trees

🔄 Complete ML Workflow Diagram

┌───────────────────────────────────────────────────┐
│          DATA PREPARATION (28%)                   │
│               S-KEFS-R                            │
└───────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────┐
│ S3 → Kinesis → Glue/EMR → Data Wrangler →       │
│    Feature Store                                  │
│    (GEEKS-DW)         (BITES-NO)                 │
└───────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────┐
│          MODEL DEVELOPMENT (26%)                  │
│            BLIINKS-FPXDR                          │
└───────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────┐
│ Train (PFIS) → Tune (BRAGS) → Evaluate          │
│                                (FARM-CAR)         │
│ XGBoost, Linear Learner, DeepAR, BlazingText    │
└───────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────┐
│     DEPLOYMENT & ORCHESTRATION (22%)              │
│          REBAS + SPEC-SAM                         │
└───────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────┐
│ Real-time/Batch/Serverless → Pipelines →        │
│ Auto-scaling                                      │
│ (REBAS)  (SPEC-SAM)  (MASS-EI)                   │
└───────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────┐
│   MONITORING, MAINTENANCE & SECURITY (24%)        │
│        CM-TAXI + I-MAKE-VOWS                      │
└───────────────────────────────────────────────────┘
                      │
                      ▼
┌───────────────────────────────────────────────────┐
│ CloudWatch → Model Monitor → Drift Detection →  │
│ IAM/KMS                                           │
│ (CM-TAXI)              (I-MAKE-VOWS)             │
└───────────────────────────────────────────────────┘

🎯 Deployment Option Decision Tree

START: Need to deploy a model?
  │
  ├─→ Real-time predictions needed?
  │   │
  │   ├─→ YES → Latency < 1 second?
  │   │    │
  │   │    ├─→ YES → Traffic pattern?
  │   │    │    │
  │   │    │    ├─→ Constant/Predictable
  │   │    │    │   → REAL-TIME ENDPOINT
  │   │    │    │     • Always-on
  │   │    │    │     • Auto-scaling
  │   │    │    │     • ml.m5/c5/p3 instances
  │   │    │    │
  │   │    │    └─→ Intermittent/Unpredictable
  │   │    │        → SERVERLESS INFERENCE
  │   │    │          • Auto-scales to zero
  │   │    │          • Cold start acceptable
  │   │    │          • Pay per invoke
  │   │    │
  │   │    └─→ NO → Processing time > 60 sec?
  │   │         │
  │   │         └─→ YES → ASYNCHRONOUS
  │   │              INFERENCE
  │   │              • Queue-based
  │   │              • S3 trigger
  │   │              • Long-running tasks
  │   │
  │   └─→ NO → Large batch of data?
  │        │
  │        └─→ YES → BATCH TRANSFORM
  │                  • Process entire datasets
  │                  • No endpoint needed
  │                  • Cost-effective for bulk
  │
  └─→ Deploy to edge devices?
      │
      └─→ YES → SAGEMAKER NEO + EDGE
                • Compile for IoT
                • No internet required
                • Optimized inference

📋 Final Review Cheat Sheet (Print Before Exam)

🖨️ PRINT THIS SECTION - REVIEW 30 MINUTES BEFORE EXAM 🖨️

🎯 THE ULTIMATE MASTER SENTENCE

"Use GEEKS-DW to prepare PAJRC data, train with BLIINKS, deploy via REBAS, orchestrate with SPEC-SAM, and monitor using CM-TAXI!"

🔑 All Mnemonics At A Glance

| Mnemonic | Full Expansion | Category |
| --- | --- | --- |
| S-KEFS-R | S3, Kinesis, EBS, FSx, SageMaker Feature Store, Redshift | Storage Services |
| PAJRC | Parquet, Avro, JSON, RecordIO, CSV | Data Formats |
| GEEKS-DW | Glue, EMR, EMR Serverless, Kinesis Firehose, SageMaker Processing, Data Wrangler | ETL Services |
| BITES-NO | Binning, Imputation, Transforming, Encoding, Scaling, Normalization, Outliers | Feature Engineering |
| MOUD | Missing, Outliers, Unbalanced, Duplicates | Data Quality |
| BLIINKS-FPXDR | BlazingText, Linear, Image, IP Insights, NTM, KNN, Seq2Seq, Factorization, PCA, XGBoost, DeepAR, RCF | SageMaker Algorithms |
| BRAGS | Bayesian, Random, AMT, Grid, Stochastic/Early Stop | Hyperparameter Tuning |
| FARM-CAR | F1, Accuracy, ROC, Matrix / R², MAE, RMSE | Evaluation Metrics |
| PFIS | Pipe, File, Instance Types, Spot | Training Modes |
| REBAS | Real-time, Edge, Batch, Async, Serverless | Deployment Options |
| SPEC-SAM | SageMaker Pipelines, Projects, EventBridge, Code*, Step Functions, Airflow, Model Registry | MLOps Services |
| MASS-EI | Multi-Model, Auto Scaling, Shadow, Serial, Elastic Inference, Inferentia | Endpoint Optimization |
| CM-TAXI | CloudWatch, Model Monitor, Trusted Advisor, Athena, X-Ray, Inference logs | Monitoring Services |
| I-MAKE-VOWS | IAM, Macie, Shield, KMS, Encryption, VPC, Organizations, WAF, Secrets Manager | Security Services |
| VINE | VPC Mode, IAM Roles, Network Isolation, Encryption Everywhere | SageMaker Security |

⚠️ Common Exam Traps

❌ TRAP: "Real-time" doesn't always mean Real-time Endpoint

→ Could be Serverless (for intermittent) or Async (for long-running)

❌ TRAP: "Cost-effective" usually means Serverless/Spot/Pipe Mode

→ Not always-on Real-time Endpoints

❌ TRAP: File Mode is NOT always wrong

→ Required for custom code needing random access to data

❌ TRAP: Grid Search is NOT always best for tuning

→ Bayesian is better for complex parameter spaces

❌ TRAP: CSV is NOT best for analytics

→ Parquet is columnar, compressed, and optimized

❌ TRAP: Model Monitor is NOT just CloudWatch

→ It's specifically for drift detection (data, model, bias, feature)

❌ TRAP: XGBoost is NOT for everything

→ DeepAR for time series, RCF for anomalies, BlazingText for text


✍️ Write on Whiteboard/Paper FIRST (During Exam)

GEEKS-DW | PAJRC | BLIINKS-FPXDR

REBAS | SPEC-SAM | CM-TAXI

BRAGS | FARM-CAR | I-MAKE-VOWS


📖 Recommended Study Resources

Official AWS Resources:

  • AWS ML Engineer Exam Guide - Primary source
  • AWS Training and Certification portal
  • AWS Documentation for SageMaker
  • AWS Whitepapers on ML best practices

Practice Tests (Prioritized):

  • Udemy by Stephane Maarek - High-quality questions
  • AWS Skill Builder - Official practice
