<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Code with Dan</title><description>Dan Wahlin — AI, Angular, Docker, Kubernetes, TypeScript, and developer tools.</description><link>https://blog.codewithdan.com/</link><item><title>Get started with GitHub Copilot CLI: A free, hands-on course</title><link>https://blog.codewithdan.com/github-copilot-cli-for-beginners/</link><guid isPermaLink="true">https://blog.codewithdan.com/github-copilot-cli-for-beginners/</guid><pubDate>Mon, 02 Mar 2026 00:00:00 GMT</pubDate><content:encoded>![GitHub Copilot CLI for Beginners](/images/blog/github-copilot-cli-for-beginners/copilot-banner.png)

GitHub Copilot has grown well beyond code completions in your editor. It now lives in your terminal, too. [GitHub Copilot CLI](https://docs.github.com/copilot/how-tos/copilot-cli) lets you review code, generate tests, debug issues, and ask questions about your projects without ever leaving the command line.

To help developers get up to speed, we put together a free, open source course: **[GitHub Copilot CLI for Beginners](https://github.com/github/copilot-cli-for-beginners)**. It&apos;s 8 chapters, hands-on from the start, and designed so you can go from installation to building real workflows in a few hours. **Already have a GitHub account?** GitHub Copilot CLI works with [GitHub Copilot Free](https://github.com/features/copilot/plans), which is available to all personal GitHub accounts.

In this post, I&apos;ll walk through what the course covers and how to get started.

## What GitHub Copilot CLI can do

If you haven&apos;t tried it yet, GitHub Copilot CLI is a conversational AI assistant that runs in your terminal. You point it at files using `@` references, and it reads your code and responds with analysis, suggestions, or generated code.

You can use it to:

- Review a file and get feedback on code quality
- Generate tests based on existing code
- Debug issues by pointing it at a file and asking what&apos;s wrong
- Explain unfamiliar code or confusing logic
- Generate commit messages, refactor functions, and more
- Write new app features (front-end, APIs, database interactions, and more)

It remembers context within a conversation, so follow-up questions build on what came before.

## What the course covers

The course is structured as 8 progressive chapters. Each one builds on the last, and you work with the same project throughout: a book collection management app. Instead of jumping between isolated snippets, you keep improving one codebase as you go.

Here&apos;s what using GitHub Copilot CLI looks like in practice. Say you want to review a Python file for potential issues. Start up Copilot CLI and ask what you&apos;d like done:

```bash
$ copilot
&gt; Review @samples/book-app-project/books.py for potential improvements. Focus on error handling and code quality.
```

Copilot reads the file, analyzes the code, and gives you specific feedback right in your terminal.

![Animated demo of GitHub Copilot CLI reviewing code in the terminal](/images/blog/github-copilot-cli-for-beginners/code-review-demo.gif)

Here are the chapters covered in the course:

1. **Quick Start** — Installation and authentication
2. **First Steps** — Learn the three interaction modes: interactive, plan, and one-shot (programmatic)
3. **Context and Conversations** — Using `@` references to point Copilot at files and directories, plus session management with `--continue` and `--resume`
4. **Development Workflows** — Code review, refactoring, debugging, test generation, and Git integration
5. **Custom Agents** — Building specialized AI assistants with `.agent.md` files (for example, a Python reviewer that always checks for type hints)
6. **Skills** — Creating task-specific instructions that auto-trigger based on your prompt
7. **MCP Servers** — Connecting Copilot to external services like GitHub repos, file systems, and documentation APIs via the Model Context Protocol
8. **Putting It All Together** — Combining agents, skills, and MCP servers into complete development workflows

![GitHub Copilot CLI Learning Path](/images/blog/github-copilot-cli-for-beginners/learning-path.png)

Every command in the course can be copied and run directly. No AI or machine learning background is required.

## Who this is for

The course is built for:

- **Developers using terminal workflows.** If you&apos;re already running builds, checking git status, and SSHing into servers from the command line, Copilot CLI fits right into that flow.
- **Teams looking to standardize AI-assisted practices.** Custom agents and skills can be shared across a team through a project&apos;s `.github/agents` and `.github/skills` directories.
- **Students and early-career developers.** The course explains AI terminology as it comes up, and every chapter includes assignments with clear success criteria.

You don&apos;t need prior experience with AI tools. If you can run commands in a terminal, you can learn and apply the concepts in this course.

## How the course teaches

Each chapter follows a consistent pattern: a real-world analogy to ground the concept, then the core technical material, then hands-on exercises. For instance, the three interaction modes are compared to ordering food at a restaurant: interactive mode is a back-and-forth conversation with a waiter, one-shot mode (programmatic mode) is like going through a drive-through, and plan mode is like studying the menu and settling on your full order before you speak.

![The ordering food analogy for Copilot CLI interaction modes](/images/blog/github-copilot-cli-for-beginners/ordering-food-analogy.png)

Later chapters use different comparisons: agents are like hiring specialists, skills work like attachments for a power drill, and MCP servers are compared to browser extensions. The goal is to give you a mental model before the technical details land.

The course also focuses on a question that&apos;s harder than it looks: *when should I use which tool?* Knowing the difference between reaching for an agent, a skill, or an MCP server takes practice, and the final chapter walks through that decision-making in a realistic workflow.

![Integration pattern: Gather Context, Analyze and Plan, Execute, Complete](/images/blog/github-copilot-cli-for-beginners/integration-pattern.png)

## Get started

The course is free and open source. You can clone the repo, or [open it in GitHub Codespaces](https://codespaces.new/github/copilot-cli-for-beginners?hide_repo_select=true&amp;ref=main&amp;quickstart=true) for a fully configured environment.

**[GitHub Copilot CLI for Beginners](https://github.com/github/copilot-cli-for-beginners)**

For a quick reference, see the [CLI command reference](https://docs.github.com/copilot/reference/cli-command-reference).

Subscribe to [GitHub Insider](https://resources.github.com/newsletter/) for more developer tips and guides.</content:encoded></item><item><title>Point, Click, Let AI Fix It: How I Built ZingIt Using the GitHub Copilot SDK</title><link>https://blog.codewithdan.com/point-click-ai-fixes-it-how-i-built-zingit-using-the-github-copilot-sdk/</link><guid isPermaLink="true">https://blog.codewithdan.com/point-click-ai-fixes-it-how-i-built-zingit-using-the-github-copilot-sdk/</guid><pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/point-click-ai-fixes-it-how-i-built-zingit-using-the-github-copilot-sdk/image-1024x264.webp)](https://blog.codewithdan.com/wp-content/uploads/2026/02/image.png)

Have you ever tried to describe a UI bug to someone? “See that button… no, the OTHER button… the one with the blue border… well, it’s kind of blue… anyway, make it bigger.” It’s painful. Screenshots help, but then you’re copying and pasting them into chat windows and hoping your AI assistant understands which pixel you’re pointing at.

I got tired of this process and decided to build a tool called ZingIt that lets you point at elements on a webpage, mark them with notes, and send them directly to AI for automatic fixes. No more describing. No more screenshots flying around. Just _Point → Click → Describe → ZingIt to AI_. If you&apos;ve ever used [frame.io](https://frame.io) (which provided some of the inspiration; more on that later), you know you can click on a timeline, add a marker, and then add your comments. It&apos;s a similar concept here, except you add comments about the UI and then hand them off to AI!

I couldn’t have built it without the [GitHub Copilot SDK](https://github.com/github/copilot-sdk). The SDK turned what would have been a complex, messy integration into something surprisingly elegant. Let me walk you through what ZingIt does and how the SDK made it possible.

## **ZingIt in Action**

The idea is simple: you’re running your app locally in development. You spot something that needs changing in the browser. Maybe a typo in a heading, maybe a button that needs a different color, maybe some padding that’s off. Instead of switching to your code editor and hunting for the right file, you:

1. Press Z to enter ZingIt&apos;s mark mode. That displays the ZingIt toolbar.

2. Select the element or elements to modify.

3. Type your instructions: “Make this heading blue”, “Increase padding to 24px”, or “Create a blue linear gradient”.

4. Click the sparkle icon (✨) in the ZingIt toolbar.

5. Watch the AI find the code and make the change.

[![](/images/blog/point-click-ai-fixes-it-how-i-built-zingit-using-the-github-copilot-sdk/image-1-1024x878.webp)](https://blog.codewithdan.com/wp-content/uploads/2026/02/image-1.png)

_Using ZingIt to select the element you&apos;d like to change._ You&apos;ll see the ZingIt toolbar at the bottom of the page.

That’s it. The AI gets the CSS selector, the HTML context, a screenshot of the element (if you choose), and your instructions. It searches your codebase, finds the right file, and makes the edit.

All you have to do is select the element, type what you want done, optionally capture a screenshot of the element by checking a checkbox, save the &quot;marker&quot;, and then click the ✨ icon to kick off the AI process.
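
To make the hand-off concrete, here&apos;s a rough sketch of the kind of payload a marker might carry and how it could be folded into a prompt. The field names and the `buildPrompt` helper are illustrative assumptions for this post, not ZingIt&apos;s actual types:

```typescript
// Illustrative shape of a marker payload. Field names are assumptions
// for this sketch, not ZingIt's actual types.
interface Marker {
  selector: string;      // CSS selector of the marked element
  html: string;          // surrounding HTML context
  note: string;          // the user's instructions
  screenshot?: string;   // optional base64-encoded image
}

// Fold a marker into a single prompt string for the agent.
function buildPrompt(marker: Marker): string {
  const lines = [
    `UI change request for element: ${marker.selector}`,
    `HTML context: ${marker.html}`,
    `Instructions: ${marker.note}`,
  ];
  if (marker.screenshot) {
    lines.push("A screenshot of the element is attached.");
  }
  return lines.join("\n");
}

console.log(buildPrompt({
  selector: "button.primary",
  html: "form#settings button.primary",
  note: "Make this button blue",
}));
```

However the fields are named, the point is the same: the agent receives a precise selector and context instead of a vague description.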

[![](/images/blog/point-click-ai-fixes-it-how-i-built-zingit-using-the-github-copilot-sdk/image-2-1024x878.webp)](https://blog.codewithdan.com/wp-content/uploads/2026/02/image-2.png)

_Adding a marker with notes and screenshot preview_

### **Multi-Agent Support**

ZingIt doesn’t lock you into one AI assistant. It supports three major coding agents:

- **Claude Code**

- **GitHub Copilot CLI**

- **OpenAI Codex**

You pick your agent when you connect, and ZingIt handles the rest. This type of choice was important to me because different developers have different AI subscriptions, and I didn’t want to force anyone into a specific ecosystem (although I highly recommend you check out [GitHub Copilot CLI](https://github.com/features/copilot/cli/)!). While it currently supports the three AI assistants mentioned above, it&apos;s certainly possible to extend it and add more in the future.

[![](/images/blog/point-click-ai-fixes-it-how-i-built-zingit-using-the-github-copilot-sdk/image-3-1024x790.webp)](https://blog.codewithdan.com/wp-content/uploads/2026/02/image-3.png)

_Choose your AI agent: Claude Code, GitHub Copilot CLI, or OpenAI Codex_

## **The GitHub Copilot SDK: The Lightbulb Moment**

I’d been thinking about this problem for quite a while. Over the 2025 holiday break, I was asked to help review a new commercial that was being created, and I used frame.io to add &quot;markers&quot; and comments. Video editors would then act on those and update the video.

[![](/images/blog/point-click-ai-fixes-it-how-i-built-zingit-using-the-github-copilot-sdk/image-4.webp)](https://blog.codewithdan.com/wp-content/uploads/2026/02/image-4.png)

That got me thinking, &quot;What if we could do something similar for the UI, but then have AI kick in and help?&quot;

Of course, if a task was fairly straightforward I&apos;d find the file and make the modification myself (yes, I still code), but oftentimes it was more involved and I wanted the AI assistant&apos;s help. That meant that every time I wanted to make a UI change, I’d go through the same tedious routine I mentioned earlier: type out a detailed prompt describing the element, take a screenshot when needed, paste it in, and hope the AI understood which thing I was pointing at. It worked, but it was slow and clunky.

When the [GitHub Copilot SDK](https://github.com/github/copilot-sdk) came out, I started digging into what it could do. Session management, streaming, image attachments, permission handling, and more. That’s when I had the &quot;lightbulb moment&quot; and realized I could build something that captures all that context automatically and sends it directly to the AI. No more manual screenshots. No more verbose descriptions. Just point at the thing and tell it what you want done.

So, the Copilot SDK is what actually turned a major pain point into a workable solution for me. Once I got it working, I decided to add support for Claude Code and OpenAI Codex as well since they also have SDKs.

Here’s what the SDK gives you out of the box:

- **Session management**: Create, resume, and destroy sessions cleanly

- **Streaming responses**: Watch the AI work in real-time

- **Permission handling**: Control what the AI can read and write

- **Image attachments**: Send screenshots alongside your prompts

- **Auto-restart**: Handle disconnects gracefully

## **How It All Fits Together**

Before diving into some code, here’s a quick look at how ZingIt works:

1. **Browser**: You mark elements on your page. ZingIt captures the selector, HTML context, and a screenshot.

2. **WebSocket**: The browser sends that data through a WebSocket to a Node.js server running locally.

3. **Copilot SDK**: The Node server hands everything off to the Copilot SDK, which talks to the AI, streams responses back, and executes file edits.

4. **Back to the browser**: The server relays the AI’s progress over the WebSocket so you can watch it work in real time.

The SDK handles the hard part: communicating with the AI, managing sessions, and streaming responses. The WebSocket layer just shuttles messages between your browser and the SDK.
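
As a sketch of that shuttling layer, the messages crossing the WebSocket can be modeled as a small discriminated union. These shapes are assumptions for illustration (they mirror the events sent later in this post), not ZingIt&apos;s actual wire protocol:

```typescript
// Illustrative message types for the browser/server WebSocket channel.
// The exact shapes are assumptions, not ZingIt's real protocol.
type ServerMessage =
  | { type: "delta"; content: string }   // streaming AI text
  | { type: "tool_start"; tool: string } // AI began running a tool
  | { type: "tool_end"; tool: string }
  | { type: "idle" }                     // AI finished
  | { type: "error"; message: string };

// Parse and validate an incoming frame before acting on it.
function parseServerMessage(raw: string): ServerMessage | null {
  try {
    const msg = JSON.parse(raw);
    const known = ["delta", "tool_start", "tool_end", "idle", "error"];
    return known.includes(msg.type) ? (msg as ServerMessage) : null;
  } catch {
    return null; // malformed JSON never reaches the UI
  }
}
```

Validating frames at the boundary keeps a single malformed message from crashing the UI side.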

Let me walk you through how ZingIt uses these features.

### **Initializing the Client**

Setting up the Copilot client is really simple. Create a new **CopilotClient** instance and you&apos;re off and running.

```typescript
import { CopilotClient } from &apos;@github/copilot-sdk&apos;;

export class CopilotAgent extends BaseAgent {
  name = &apos;copilot&apos;;
  model: string;
  private client: CopilotClient | null = null;

  constructor() {
    super();
    this.model = process.env.COPILOT_MODEL || &apos;claude-sonnet-4-20250514&apos;;
  }

  async start(): Promise&lt;void&gt; {
    this.client = new CopilotClient({
      logLevel: &apos;info&apos;,
      autoRestart: true,  // Handles disconnects gracefully
    });

    await this.client.start();
    console.log(`✓ Copilot SDK initialized (model: ${this.model})`);
  }
}
```

The **autoRestart: true** option is a lifesaver. In a tool like ZingIt where users might step away mid-session, having the SDK automatically handle reconnection means fewer error states to manage.

### **Creating Sessions with Context**

When a user connects from ZingIt’s browser UI, we create a session with all the context the AI needs:

```typescript
async createSession(wsRef: WebSocketRef, projectDir: string,
                    resumeSessionId?: string): Promise&lt;AgentSession&gt; {
  const sessionConfig = {
    model: this.model,
    streaming: true,
    systemMessage: {
      mode: &apos;append&apos; as const,
      content: `
        You are a UI debugging assistant working in the project directory: ${projectDir}
       
        When given markers about UI elements:
        1. Search for the corresponding code using the selectors and HTML context provided
        2. Make the requested changes in the project at ${projectDir}
        3. Be thorough in finding the right files and making precise edits
       
        When screenshots are provided, use them to:
        - Better understand the visual context and styling of the elements
        - Identify the exact appearance that needs to be changed
        - Verify you&apos;re targeting the correct element based on its visual representation
      `
    },
    onPermissionRequest: async (request: any) =&gt; {
      // ZingIt auto-approves all requests, including file reads and
      // writes, since making edits is the whole point of the tool
      return { kind: &apos;approved&apos; as const };
    },
  };

  // Resume existing session if we have a sessionId, otherwise create new
  const session = resumeSessionId
    ? await this.client.resumeSession(resumeSessionId, sessionConfig)
    : await this.client.createSession(sessionConfig);
```

Two things worth highlighting here aside from the prompt:

**Session resumption**: If a user’s WebSocket disconnects and reconnects (happens more than you’d think), we can resume right where we left off instead of starting fresh. The AI remembers the conversation context.

**Permission handling**: The onPermissionRequest callback lets you control what the AI can do. For ZingIt, we auto-approve file reads and writes since that’s the whole point. In other scenarios, you might want to prompt the user for approval.
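
For those other scenarios, the decision can be factored into a small policy function and wired into the callback. This is a sketch; the request/response shapes follow the snippet above and are assumptions about the SDK&apos;s exact types:

```typescript
// Approve only file reads/writes; route everything else to denial.
// The { kind } shapes mirror the onPermissionRequest snippet above
// and are assumptions about the SDK's exact types.
type PermissionResult =
  | { kind: "approved" }
  | { kind: "denied"; reason?: string };

function decidePermission(requestKind: string): PermissionResult {
  if (requestKind === "read" || requestKind === "write") {
    return { kind: "approved" };
  }
  return { kind: "denied", reason: `Operation "${requestKind}" requires user approval` };
}

// Wiring it in (hypothetical):
// onPermissionRequest: async (request) => decidePermission(request.kind)
```

A denied result could instead trigger a prompt in the browser UI before the operation proceeds.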

### **Streaming Events to the Browser**

ZingIt shows real-time progress as the AI works. The SDK’s event system makes this straightforward, and the ZingIt agent panel displays the agent messages received by the browser.

```typescript
const unsubscribe = session.on((event) =&gt; {
  switch (event.type) {
    case &apos;assistant.message_delta&apos;:
      // Streaming chunk - send to browser
      send({ type: &apos;delta&apos;, content: event.data.deltaContent });
      break;

    case &apos;tool.execution_start&apos;:
      console.log(&apos;[Copilot Agent] Tool executing:&apos;, event.data.toolName);
      send({ type: &apos;tool_start&apos;, tool: event.data.toolName });
      break;

    case &apos;tool.execution_complete&apos;:
      send({ type: &apos;tool_end&apos;, tool: event.data.toolCallId });
      break;

    case &apos;session.idle&apos;:
      // AI finished working
      send({ type: &apos;idle&apos; });
      break;

    case &apos;session.error&apos;:
      send({ type: &apos;error&apos;, message: event.data.message });
      break;
  }
});
```

This event-driven approach means the browser UI can show exactly what’s happening: “Searching files…”, “Editing src/components/Header.tsx…”, “Done!” Users aren’t left wondering if something froze.
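
Those status strings can come from a small mapper over the events. A sketch (the event shapes follow the switch above; the wording of each status line is my own):

```typescript
// Turn a streamed event into a short status line for the UI panel.
// Event shapes follow the session.on switch above; exact payloads
// are assumptions about the SDK.
function statusFor(event: { type: string; data?: any }): string {
  switch (event.type) {
    case "tool.execution_start":
      return `Running tool: ${event.data.toolName}...`;
    case "tool.execution_complete":
      return "Tool finished";
    case "session.idle":
      return "Done!";
    case "session.error":
      return `Error: ${event.data.message}`;
    default:
      return ""; // message deltas get appended to the transcript instead
  }
}
```

Keeping the mapping in one pure function also makes the panel easy to unit test.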

### **Sending Screenshots with Prompts**

Here’s where it gets interesting. ZingIt captures screenshots of the marked elements and sends them alongside the text prompt. The SDK handles this via file attachments:

```typescript
send: async (msg: { prompt: string; images?: ImageContent[] }) =&gt; {
  const attachments: Array&lt;{ type: &apos;file&apos;; path: string; displayName?: string }&gt; = [];

  if (msg.images &amp;&amp; msg.images.length &gt; 0) {
    const tempDir = os.tmpdir();

    for (let i = 0; i &lt; msg.images.length; i++) {
      const img = msg.images[i];
      const ext = img.mediaType.split(&apos;/&apos;)[1] || &apos;png&apos;;
      const tempPath = path.join(tempDir, `zingit-screenshot-${randomUUID()}.${ext}`);

      // Decode base64 and save to temp file
      const buffer = Buffer.from(img.base64, &apos;base64&apos;);
      await fs.writeFile(tempPath, buffer, { mode: 0o600 });

      attachments.push({
        type: &apos;file&apos;,
        path: tempPath,
        displayName: img.label || `Screenshot ${i + 1}`
      });
    }
  }

  await session.sendAndWait({
    prompt: msg.prompt,
    attachments: attachments.length &gt; 0 ? attachments : undefined
  });
}
```

The **displayName** option is a nice touch. It lets the AI know what each screenshot represents (“Screenshot of Marker 1: button.primary”) instead of just seeing a generic filename.

As of today the temp file is needed (some other SDKs let you work with base64 directly), but I&apos;m hopeful that may change in the future. It&apos;s not a big deal here: once a process completes, the code cleans up the temp files, and everything runs on a single developer machine.

## **Why This Matters**

The GitHub Copilot SDK abstracts away the hard parts of AI integration:

- **Authentication**: Uses your existing GitHub Copilot CLI subscription

- **Model selection**: Switch between Claude, GPT, and others through config

- **Streaming**: Built-in support for real-time responses

- **Session persistence**: Resume conversations without losing context

- **Multimodal input**: Text and images in the same request

Without the SDK, ZingIt would have been a weekend hack that worked on my machine and probably nowhere else. With the SDK, it’s a tool that other developers can actually benefit from as well.

## **Try It Yourself**

Try it out by visiting the link below and following the steps, or check out the GitHub repo for all the code behind ZingIt:

Website: [https://danwahlin.github.io/zingit](https://danwahlin.github.io/zingit)  
Repo: [https://github.com/DanWahlin/zingit](https://github.com/DanWahlin/zingit)

The [Copilot SDK](https://github.com/github/copilot-sdk) turned what I expected to be a multi-week integration into something I got running in an evening. A few days later with the help of [GitHub Copilot CLI](https://github.com/features/copilot/cli/), ZingIt was released. It&apos;s been a fun project to work on.

* * *

**Found this helpful?** Follow me for more AI dev tools content:

- Twitter: [@danwahlin](https://twitter.com/danwahlin)

- LinkedIn: [Dan Wahlin](https://www.linkedin.com/in/danwahlin)

- GitHub: [DanWahlin](https://github.com/DanWahlin)</content:encoded></item><item><title>OSS AI Summit: Building with LangChain</title><link>https://blog.codewithdan.com/oss-ai-summit-building-with-langchain/</link><guid isPermaLink="true">https://blog.codewithdan.com/oss-ai-summit-building-with-langchain/</guid><pubDate>Mon, 01 Dec 2025 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/oss-ai-summit-building-with-langchain/image-1024x400.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/12/image.png)

Most AI demos look impressive in a notebook, but they fall apart the moment they touch real data, real users, or real scale. The companies that will win in 2026 aren’t the ones with the flashiest prototypes; they’re the ones who can reliably design, debug, and deploy agent-powered AI applications.

That’s exactly why we created the OSS AI Summit.

On December 10th we’re bringing together people from LangChain and Microsoft for a focused, no-fluff 2-hour online event centered on LangChain v1 and the patterns that turn experiments into production systems.

#### What you’ll walk away with

- A clear mental model of LangChain v1: how components, agents, tools, and memory actually fit together in Python and JavaScript

- War stories from teams using agents to solve real-world problems (including a candid fireside chat with people from Intercom)

- Live walkthrough of MCP (Model Context Protocol) powering single and multi-agent systems with LangChain.js

- Practical demos you can run today including agents that query databases, call APIs, and coordinate across specialized roles

- A Q&amp;A panel with Hunter Lovell and Sydney Runkle from LangChain

#### Try the code before (or after) the event

We’re sharing three complete reference apps so you can explore the concepts hands-on:

- AI Sales Analyst – Python agent that analyzes real sales data in PostgreSQL using LangChain + Azure OpenAI + MCP

[https://github.com/Azure-Samples/langchain-agent-python](https://github.com/Azure-Samples/langchain-agent-python)

- AI Travel Agency – Multi-agent system in LangChain.js with MCP servers in Python, Node.js, Java, and .NET, deployed on Azure Container Apps

[https://github.com/Azure-Samples/ai-travel-agents](https://github.com/Azure-Samples/ai-travel-agents)

- Serverless Burger-Order Agent – End-to-end LangChain.js agent using MCP to place orders via a real API, running on Azure Static Web Apps + Azure Functions

[https://github.com/Azure-Samples/mcp-agent-langchainjs](https://github.com/Azure-Samples/mcp-agent-langchainjs)

#### Who this is for

- Developers moving from simple chatbots to agentic workflows

- Architects figuring out how to connect LLMs to internal systems securely

- Engineering leads who need proven patterns for reliability, observability, and scale on Azure

**Date:** December 10, 2025  
**Time:** 8:00 – 10:00 AM Pacific Time  
**Format:** Free live stream  
**Register:** [https://aka.ms/OSSAISummitRegistration](https://aka.ms/OSSAISummitRegistration)

We’ll see you there. 🚀</content:encoded></item><item><title>🚀 Leveling Up Your AI Agents: A Story-Driven Guide to MCP Tools, Resources, Prompts, and Logging</title><link>https://blog.codewithdan.com/leveling-up-your-ai-agents-a-story-driven-guide-to-mcp-tools-resources-prompts-and-logging/</link><guid isPermaLink="true">https://blog.codewithdan.com/leveling-up-your-ai-agents-a-story-driven-guide-to-mcp-tools-resources-prompts-and-logging/</guid><pubDate>Mon, 21 Jul 2025 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/leveling-up-your-ai-agents-a-story-driven-guide-to-mcp-tools-resources-prompts-and-logging/mcp-capabilities-2-1024x683.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/07/mcp-capabilities-2.jpg)

AI copilots are getting smarter. They can chat, reason, and even follow complex instructions. But if you want your AI to actually do something useful, like trigger a function, fetch some data, or keep a log of what happened, you need more than just prompts. You need structure and you need context. You need something like the **Model Context Protocol (MCP)**.

In this post, I’ll walk you through MCP’s four core capabilities (tools, resources, prompts, logging) using a fun story about _Contextia_, an AI copilot assisting Commander Alex on their ship. I put this together because when I first started with MCP, it wasn’t immediately obvious when to use tools versus resources, or how prompts and logging fit into the bigger picture.

If you’re working with [GitHub Copilot Agent Mode](https://code.visualstudio.com/blogs/2025/04/07/agentMode), another AI-powered assistant, or building your own agent architecture, this post will (hopefully) help clarify how and why you&apos;d use each capability.

**_A Quick Sidebar on MCP_**

_MCP is mentioned frequently in AI conversations these days, but is it always necessary? As with any technology, it&apos;s important to choose the right tool for the right job. In some situations, a direct API call combined with insights from an LLM is enough and might be more efficient than using MCP. In other cases, MCP can be essential for exposing tools and data in a structured and secure way that enhances your AI systems. As with all technology, it&apos;s important to understand your use case, do your research, evaluate your options, and make an informed decision._ _Having said that, this post is all about MCP&apos;s core capabilities, so let&apos;s get back to it!_

## 🛠️ Tools: Executing Commands in the Engineering Bay

### Story: The Power Reactor Problem

Commander Alex is orbiting Planet Xylon when a red alert flashes. The ship’s power reactor is overheating.

“_Contextia_,” Alex says, “reroute power from the main thrusters to life support.”

Without hesitation, __Contextia__ calls the internal reroute function. Seconds later, the alert clears and oxygen levels stabilize.

Alex gives a thumbs-up. “Nice save.”

### Why This Matters

The `tools` capability lets your MCP server expose functions the model can call. These functions might send an email, deploy an app, query a database, or perform internal logic. It’s the agent’s way of saying, “Perform this action!”

_TypeScript Example_

```typescript

import { Server } from &quot;@modelcontextprotocol/sdk/server&quot;;
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from &quot;@modelcontextprotocol/sdk/types&quot;;

const server = new Server(
  { 
    name: &quot;ship-control&quot;, 
    version: &quot;1.0.0&quot; 
  },
  { 
    capabilities: { 
      tools: {} 
    } 
  }
);

// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () =&gt; {
  return {
    tools: [
      {
        name: &quot;reroute_power&quot;,
        description: &quot;Reroutes ship power between systems&quot;,
        inputSchema: {
          type: &quot;object&quot;,
          properties: {
            from: { type: &quot;string&quot;, description: &quot;Source system&quot; },
            to: { type: &quot;string&quot;, description: &quot;Destination system&quot; }
          },
          required: [&quot;from&quot;, &quot;to&quot;]
        }
      },
      {
        name: &quot;scan_system&quot;,
        description: &quot;Scans a star system for threats&quot;,
        inputSchema: {
          type: &quot;object&quot;,
          properties: {
            system_id: { type: &quot;string&quot;, description: &quot;System identifier&quot; },
            scan_type: { 
              type: &quot;string&quot;, 
              enum: [&quot;basic&quot;, &quot;deep&quot;, &quot;tactical&quot;],
              description: &quot;Type of scan to perform&quot;
            }
          },
          required: [&quot;system_id&quot;]
        }
      }
    ]
  };
});

// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) =&gt; {
  const { name, arguments: args } = request.params;
  
  switch (name) {
    case &quot;reroute_power&quot;:
      const { from, to } = args as { from: string; to: string };
      // Simulate power rerouting logic
      const powerLevel = Math.floor(Math.random() * 100);
      return {
        content: [
          {
            type: &quot;text&quot;,
            text: `Successfully rerouted ${powerLevel}% power from ${from} to ${to}. Systems stabilized.`
          }
        ]
      };

    case &quot;scan_system&quot;:
      const { system_id, scan_type = &quot;basic&quot; } = args as { 
        system_id: string; 
        scan_type?: string; 
      };
      // Simulate system scan
      const threats = scan_type === &quot;deep&quot; ? 
        [&quot;2 asteroid fields&quot;, &quot;1 ion storm&quot;] : 
        [&quot;Clear navigation path&quot;];
      return {
        content: [
          {
            type: &quot;text&quot;,
            text: `${scan_type.toUpperCase()} scan of system ${system_id} complete. Detected: ${threats.join(&quot;, &quot;)}`
          }
        ]
      };

    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});
```

## 📦 Resources: Pulling Data from the Ship’s Knowledge Bank

### Story: Navigating the Nebula

The crew spots a mysterious cloud formation ahead. Commander Alex squints at the main display.

“_Contextia_, have we seen this nebula before?”

_Contextia_ quickly searches the Galactic Survey Archive and finds a file labeled `nebula-273.json`. After reading the star chart data, _Contextia_ replies:

“It’s a Class B ionized gas cloud. Radiation levels are stable, and the density won’t affect navigation.”

Alex smiles. “Perfect. Let’s chart a course through it.”

A few moments later, another crew member chimes in. “Couldn’t we just run a tool to scan it instead?”

Alex shakes their head. “No need. The data already exists. We don’t need to scan. We just need the facts.”

### Why This Matters

MCP `resources` are all about providing context. These are files, URLs, or embedded data sources the model can reference but not change.

Unlike tools, which perform actions, resources provide read-only data. Use them when you want the model to _understand_ or _look something up_, rather than _do_ something.

This is great for exposing:

- JSON config files

- Markdown docs

- Vector embeddings

- Product catalogs

- Data in a database

- Read-only API data

- Anything the model should _see_, but not touch

_TypeScript Example_

```typescript
import { Server } from &quot;@modelcontextprotocol/sdk/server/index.js&quot;;
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema
} from &quot;@modelcontextprotocol/sdk/types.js&quot;;

const server = new Server(
  { 
    name: &quot;galactic-mapper&quot;, 
    version: &quot;1.0.0&quot; 
  },
  { 
    capabilities: { 
      resources: {} 
    } 
  }
);

// List available resources
server.setRequestHandler(ListResourcesRequestSchema, async () =&gt; {
  return {
    resources: [
      {
        uri: &quot;file:///data/nebula-273.json&quot;,
        name: &quot;Nebula 273 Star Chart&quot;,
        description: &quot;Detailed star chart data for nebula-273&quot;,
        mimeType: &quot;application/json&quot;
      },
      {
        uri: &quot;file:///data/ship-status.log&quot;,
        name: &quot;Ship Status Log&quot;,
        description: &quot;Real-time ship systems status&quot;,
        mimeType: &quot;text/plain&quot;
      }
    ]
  };
});

// Read resource contents
server.setRequestHandler(ReadResourceRequestSchema, async (request) =&gt; {
  const { uri } = request.params;
  
  switch (uri) {
    case &quot;file:///data/nebula-273.json&quot;:
      try {
        const nebulaData = {
          id: &quot;nebula-273&quot;,
          classification: &quot;Class B ionized gas cloud&quot;,
          coordinates: { x: 2847, y: 1592, z: 394 },
          radiation_level: &quot;stable&quot;,
          density: &quot;low&quot;,
          navigation_safety: &quot;safe&quot;,
          mineral_content: [&quot;helium&quot;, &quot;hydrogen&quot;, &quot;trace lithium&quot;],
          discovered: &quot;2387.156&quot;
        };
        return {
          contents: [
            {
              uri,
              mimeType: &quot;application/json&quot;,
              text: JSON.stringify(nebulaData)
            }
          ]
        };
      } catch (error) {
        throw new Error(`Failed to read nebula data: ${error}`);
      }

    case &quot;file:///data/ship-status.log&quot;:
      const logData = `
[2387.234] POWER: All systems nominal - 98% efficiency
[2387.234] LIFE_SUPPORT: Oxygen 21%, CO2 0.04% - OPTIMAL
[2387.235] NAVIGATION: Course locked to Nebula-273
[2387.235] SHIELDS: 100% - No threats detected
[2387.236] ENGINES: Warp core stable - Ready for jump
      `.trim();
      return {
        contents: [
          {
            uri,
            mimeType: &quot;text/plain&quot;,
            text: logData
          }
        ]
      };

    default:
      throw new Error(`Resource not found: ${uri}`);
  }
});
```

## 📝 Prompts: Federation-Approved Communication Templates

### Story: Sending a Diplomatic Message

After landing on a peaceful planet, Alex wants to send a greeting to its leader, Chancellor Vira.

“_Contextia_, send a standard welcome message. Use our Federation protocol.”

_Contextia_ pulls up a communication template, fills in the planet and title fields, and generates a polished message that strikes the right tone. It gets transmitted moments later.

The Chancellor responds with gratitude and offers safe passage across their system.

### Why This Matters

MCP `prompts` allow you to predefine prompt templates that the AI can use in a structured way.

This keeps responses clean, on-brand, and reliable. It&apos;s especially useful if you&apos;re generating things like:

- Welcome emails

- Issue triage messages

- GitHub PR summaries

- Code review feedback

- Internal reports

It also reduces the risk of the model hallucinating random or off-brand messaging.

_TypeScript Example_

```typescript
import { Server } from &quot;@modelcontextprotocol/sdk/server/index.js&quot;;
import {
  ListPromptsRequestSchema,
  GetPromptRequestSchema
} from &quot;@modelcontextprotocol/sdk/types.js&quot;;

const PROMPTS = {
  &quot;diplomatic-greeting&quot;: {
    name: &quot;diplomatic-greeting&quot;,
    description: &quot;Generate a formal diplomatic greeting message&quot;,
    arguments: [
      {
        name: &quot;recipient_name&quot;,
        description: &quot;Name of the diplomatic contact&quot;,
        required: true
      },
      {
        name: &quot;planet_name&quot;, 
        description: &quot;Name of the planet or system&quot;,
        required: true
      },
      {
        name: &quot;purpose&quot;,
        description: &quot;Purpose of the diplomatic contact&quot;,
        required: false
      }
    ]
  }
};

const server = new Server(
  { 
    name: &quot;diplomatic-comms&quot;, 
    version: &quot;1.0.0&quot; 
  },
  { 
    capabilities: { 
      prompts: {} 
    } 
  }
);

// List available prompts
server.setRequestHandler(ListPromptsRequestSchema, async () =&gt; {
  return {
    prompts: Object.values(PROMPTS)
  };
});

// Get specific prompt with arguments
server.setRequestHandler(GetPromptRequestSchema, async (request) =&gt; {
  const { name, arguments: args } = request.params;
  const prompt = PROMPTS[name as keyof typeof PROMPTS];
  
  if (!prompt) {
    throw new Error(`Prompt not found: ${name}`);
  }

  if (name === &quot;diplomatic-greeting&quot;) {
    const { recipient_name, planet_name, purpose } = args || {};
    
    return {
      messages: [
        {
          role: &quot;user&quot;,
          content: {
            type: &quot;text&quot;,
            text: `Generate a formal diplomatic greeting message:

Recipient: ${recipient_name}
Planet/System: ${planet_name}
Purpose: ${purpose || &quot;General diplomatic relations&quot;}

The message should:
- Follow diplomatic protocols
- Be respectful and professional
- Establish peaceful intentions
- Offer appropriate cooperation

Please format as an official diplomatic transmission.`
          }
        }
      ]
    };
  }
  
  throw new Error(`Prompt implementation not found for: ${name}`);
});
```
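The handler above falls back to a default when the optional `purpose` argument is omitted. Here is a small standalone sketch of that interpolation logic (the `renderGreeting` helper is illustrative only, not part of the MCP SDK):

```typescript
// Hypothetical helper mirroring the argument defaulting in the prompt handler above.
interface GreetingArgs {
  recipient_name: string;
  planet_name: string;
  purpose?: string;
}

function renderGreeting({ recipient_name, planet_name, purpose }: GreetingArgs): string {
  return [
    `Recipient: ${recipient_name}`,
    `Planet/System: ${planet_name}`,
    // Nullish coalescing supplies the default when `purpose` is omitted
    `Purpose: ${purpose ?? "General diplomatic relations"}`
  ].join("\n");
}

// Omitting `purpose` produces the default text.
console.log(renderGreeting({ recipient_name: "Chancellor Vira", planet_name: "Arden IV" }));
```

Keeping the defaulting in one place like this means every prompt rendered from the template stays consistent, even when callers pass only the required arguments.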

## 📜 Logging: Keeping a Captain’s Log

### Story: Reviewing the Mission

Before ending their shift, Alex checks in with _Contextia_.

“Can you show me what we accomplished today?”

_Contextia_ opens the ship’s digital logbook and reads:

- 09:45 – Executed reroutePower()

- 11:30 – Read data from nebula-273.json

- 14:00 – Used diplomatic-greeting prompt to contact Chancellor Vira

- 16:20 – Logged mission status: &apos;Diplomatic contact successful&apos;

“Log it, sign it, and let’s get some rest,” Alex says.

### Why This Matters

MCP `logging` lets your agent (or MCP server) record what happened. It logs what tools were used, what files or other resources were accessed, and which prompts were triggered.

This is critical for:

- Debugging

- Auditing

- Replayability

- Traceability

You can log to the console, a file, a database, or any external telemetry service.

_TypeScript Example_

```typescript
import { Server } from &quot;@modelcontextprotocol/sdk/server/index.js&quot;;
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from &quot;@modelcontextprotocol/sdk/types.js&quot;;

const server = new Server(
  { 
    name: &quot;ship-captains-log&quot;, 
    version: &quot;1.0.0&quot; 
  },
  { 
    capabilities: { 
      tools: {},
      logging: {} // ✅ Must declare logging capability
    } 
  }
);

// Simple log helper (level is constrained to valid MCP logging levels)
function log(level: &quot;debug&quot; | &quot;info&quot; | &quot;warning&quot; | &quot;error&quot;, logger: string, message: string) {
  server.sendLoggingMessage({
    level,
    logger,
    data: message
  });
}

// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () =&gt; {
  log(&apos;debug&apos;, &apos;tools&apos;, &apos;Client requested tool list&apos;);
  
  return {
    tools: [
      {
        name: &quot;log_mission_status&quot;,
        description: &quot;Log the current mission status to the captain&apos;s log&quot;,
        inputSchema: {
          type: &quot;object&quot;,
          properties: {
            status: { type: &quot;string&quot;, description: &quot;Mission status update&quot; },
            location: { type: &quot;string&quot;, description: &quot;Current location&quot; }
          },
          required: [&quot;status&quot;]
        }
      }
    ]
  };
});

// Handle tool execution with logging
server.setRequestHandler(CallToolRequestSchema, async (request) =&gt; {
  const { name, arguments: args } = request.params;
  
  log(&apos;info&apos;, &apos;tools&apos;, `Tool called: ${name}`);
  
  try {
    if (name === &quot;log_mission_status&quot;) {
      const { status, location } = args as { status: string; location?: string };
      
      log(&apos;debug&apos;, &apos;tools&apos;, &apos;Processing mission status log&apos;);
      
      const timestamp = new Date().toISOString();
      const locationText = location ? ` at ${location}` : &quot;&quot;;
      const logEntry = `[${timestamp}] MISSION STATUS${locationText}: ${status}`;
      
      log(&apos;info&apos;, &apos;mission&apos;, `Mission status logged: ${status}`);
      
      return { 
        content: [{ 
          type: &quot;text&quot;, 
          text: `Mission status logged successfully:\n${logEntry}` 
        }] 
      };
    }
    
    throw new Error(`Unknown tool: ${name}`);
    
  } catch (error) {
    // Catch variables are `unknown` in strict TypeScript, so narrow before reading .message
    const message = error instanceof Error ? error.message : String(error);
    log(&apos;error&apos;, &apos;tools&apos;, `Tool failed: ${message}`);
    throw error;
  }
});
```

## Wrapping Up: Bringing Structure to Smart Agents

That’s it! Hopefully these simple stories and examples helped make MCP’s tools, resources, prompts, and logging a little more approachable. You’ve now seen how each capability plays a role in building intelligent, action-oriented agents that can leverage real-world context and control.

Let’s recap:

- Use **tools** when your AI needs to _perform_ an action.

- Use **resources** when your AI needs to _reference_ existing data.

- Use **prompts** to keep responses clean, consistent, and reusable.

- Use **logging** to capture what your MCP server does.

Together, these capabilities give your AI systems the structure they need to do real work in safe and predictable ways. Interested in learning more about MCP? Check out this free course:  
  
📖 [Model Context Protocol for Beginners](https://github.com/microsoft/mcp-for-beginners)

**Additional MCP Resources**

🔗 [Model Context Protocol Introduction](https://modelcontextprotocol.io/introduction)  
🛠️ [Using MCP servers in VS Code](https://code.visualstudio.com/docs/copilot/chat/mcp-servers)  
🤖 [MCP Servers for VS Code agent mode](https://code.visualstudio.com/mcp)  
🦸 [Marvel MCP Server Walkthrough](https://blog.codewithdan.com/integrating-ai-with-external-apis-building-a-marvel-mcp-server/)  
  
Got questions? Reach out on [social](https://x.com/danwahlin)!</content:encoded></item><item><title>AI Repo of the Week: Generative AI for Beginners with JavaScript</title><link>https://blog.codewithdan.com/ai-repo-of-the-week-generative-ai-for-beginners-with-javascript/</link><guid isPermaLink="true">https://blog.codewithdan.com/ai-repo-of-the-week-generative-ai-for-beginners-with-javascript/</guid><pubDate>Thu, 03 Jul 2025 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/ai-repo-of-the-week-generative-ai-for-beginners-with-javascript/image-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image.png)

_A fun, hands-on learning journey that teaches JavaScript developers how to build AI-powered apps using generative AI and large language models._

## Introduction

Ready to explore the fascinating world of Generative AI using your JavaScript skills? This week’s featured repository, [**Generative AI for Beginners with JavaScript**](https://github.com/microsoft/generative-ai-with-javascript), is your launchpad into the future of application development. Whether you&apos;re just starting out or looking to expand your AI toolbox, this open-source GitHub resource offers a rich, hands-on journey. It includes interactive lessons, quizzes, and even time-travel storytelling featuring historical legends like Leonardo da Vinci and Ada Lovelace.

Each chapter combines narrative-driven learning with practical exercises, helping you understand foundational AI concepts and apply them directly in code. It’s immersive, educational, and genuinely fun.  

[![](/images/blog/ai-repo-of-the-week-generative-ai-for-beginners-with-javascript/rome.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/07/rome.png)

## What You&apos;ll Learn

#### 1\. Foundations of Generative AI and LLMs

Start with the basics: What is generative AI? How do large language models (LLMs) work? This chapter lays the groundwork for how these technologies are transforming JavaScript development.

#### 2\. Build Your First AI-Powered App

Walk through setting up your environment and creating your first AI app. Learn how to configure prompts and unlock the potential of AI in your own projects.

#### 3\. Prompt Engineering Essentials

Get hands-on with prompt engineering techniques that shape how AI models respond. Explore strategies for crafting prompts that are clear, targeted, and effective.

#### 4\. Structured Output with JSON

Learn how to guide the model to return structured data formats like JSON—critical for integrating AI into real-world applications.
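To give a feel for why structured output matters, here is a minimal sketch (illustrative only, not code from the course) of validating a model&apos;s JSON reply before your app trusts it:

```typescript
// Illustrative only: parse and shape-check a model's JSON reply before using it.
interface ProductSummary {
  name: string;
  price: number;
}

function parseProduct(raw: string): ProductSummary {
  const data = JSON.parse(raw);
  if (typeof data !== "object" || data === null) throw new Error("Reply is not an object");
  if (typeof data.name !== "string") throw new Error("Missing string field: name");
  if (typeof data.price !== "number") throw new Error("Missing number field: price");
  // Return a fresh object so only the expected fields flow onward
  return { name: data.name, price: data.price };
}

// A well-formed reply parses cleanly; anything else fails fast.
console.log(parseProduct('{"name":"Telescope","price":129.99}'));
```

Failing fast on a malformed reply is usually better than letting half-validated model output propagate into the rest of the application.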

#### 5\. Retrieval-Augmented Generation (RAG)

Go beyond static prompts by combining LLMs with external data sources. Discover how RAG lets your app pull in live, contextual information for more intelligent results.

#### 6\. Function Calling and Tool Use

Give your LLM new powers! Learn how to connect your own functions and tools to your app, enabling more dynamic and actionable AI interactions.

#### 7\. Model Context Protocol (MCP)

Dive into MCP, a new standard for organizing prompts, tools, and resources. Learn how it simplifies AI app development and fosters consistency across projects.

#### 8\. Enhancing MCP Clients with LLMs

Build on what you’ve learned by integrating LLMs directly into your MCP clients. See how to make them smarter, faster, and more helpful.

_More chapters coming soon—watch the [repo](https://github.com/microsoft/generative-ai-with-javascript) for updates!_

## Companion App: Interact with History

Experience the power of generative AI in action through the companion web app—where you can chat with historical figures and witness how JavaScript brings AI to life in real time.

[![](/images/blog/ai-repo-of-the-week-generative-ai-for-beginners-with-javascript/dinocrates-1024x755.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/07/dinocrates.png)

_Talk with Dinocrates and other historical characters using a custom AI app!_

## Conclusion

[**Generative AI for Beginners with JavaScript**](https://github.com/microsoft/generative-ai-with-javascript) is more than a course—it’s an adventure into how storytelling, coding, and AI can come together to create something fun and educational. Whether you&apos;re here to upskill, experiment, or build the next big thing, this repository is your all-in-one resource to get started with confidence.

**Jump into the future of development—[check out the repo](https://github.com/microsoft/generative-ai-with-javascript) and start building with AI today!**</content:encoded></item><item><title>AI Repo of the Week: MCP for Beginners</title><link>https://blog.codewithdan.com/ai-repo-of-the-week-mcp-for-beginners/</link><guid isPermaLink="true">https://blog.codewithdan.com/ai-repo-of-the-week-mcp-for-beginners/</guid><pubDate>Thu, 22 May 2025 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/ai-repo-of-the-week-mcp-for-beginners/image-5.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/05/image-5.png)

## **Introduction**

Welcome to this week&apos;s spotlight in our AI Repo of the Week series—introducing the &quot;MCP for Beginners&quot; course, a fantastic starting point for AI developers and enthusiasts eager to navigate the world of Model Context Protocol (MCP). This open-source GitHub repository offers an in-depth exploration of MCP, an interface designed to extend the capabilities of AI models by integrating them with external tools, APIs, and data sources. Whether you&apos;re new to MCP or looking to strengthen your understanding, this comprehensive resource is a perfect fit.

The repository provides practical lessons and examples that guide developers through creating MCP servers in multiple programming environments, including C#, Python, Java, and TypeScript. A primary goal is to demystify the process of building scalable AI applications by employing standardized protocols. Here&apos;s a quick overview of what you can expect from this repository.

## **Key Features &amp; Learning Journey**

**Getting Started with MCP**

The &quot;MCP for Beginners&quot; repository kicks off with an introduction to the Model Context Protocol, setting the stage by explaining why a standardized approach matters for scalable AI applications. Users can quickly get their hands dirty with step-by-step lessons designed to implement basic server functionality. This practical approach ensures that you not only learn the MCP theory but also see it in action. The repository is structured to facilitate a smooth learning curve, with each section building on the previous one. Here are a few of the key features you&apos;ll learn about:

- **Core Concepts Explained**: In-depth exploration of the core concepts of MCP, including client-server architecture, key protocol components, and messaging patterns.

- **First Server Setup**: Begin by creating your first MCP server. You&apos;ll explore how to utilize tools like the inspector to test and debug your setups.

- **Client Development**: Progress to building a client that can interact with the MCP server. Special attention is given to enhancing client capabilities using Large Language Models (LLMs).

- **Advanced Server Integration**: Learn to operate MCP servers in various environments, including consumption via GitHub Copilot Agent and Visual Studio Code.

Here&apos;s the complete MCP curriculum structure:

- **[Introduction to MCP](https://github.com/microsoft/mcp-for-beginners/blob/main/00-Introduction/README.md)**  
    Overview of the Model Context Protocol and its significance in AI pipelines, including what the Model Context Protocol is, why standardization matters, and practical use cases and benefits.

- **[Core Concepts Explained](https://github.com/microsoft/mcp-for-beginners/blob/main/01-CoreConcepts/README.md)**  
    In-depth exploration of the core concepts of MCP, including client-server architecture, key protocol components, and messaging patterns.

- **[Security in MCP](https://github.com/microsoft/mcp-for-beginners/blob/main/02-Security/README.md)**  
    Identifying security threats within MCP-based systems, techniques and best practices for securing implementations.

- **[Getting Started with MCP](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/README.md)**  
    Environment setup and configuration, creating basic MCP servers and clients, integrating MCP with existing applications.

- **[First server](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/01-first-server/README.md)**  
    Setting up a basic server using the MCP protocol, understanding the server-client interaction, and testing the server.

- **[First client](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/02-client/README.md)**  
    Setting up a basic client using the MCP protocol, understanding the client-server interaction, and testing the client.

- **[Client with LLM](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/03-llm-client/README.md)**  
    Setting up a client using the MCP protocol with a Large Language Model (LLM).

- **[Consuming a server with Visual Studio Code](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/04-vscode/README.md)**  
    Setting up Visual Studio Code to consume servers using the MCP protocol.

- **[Creating a server using SSE](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/05-sse-server/README.md)**  
    SSE helps expose a server to the internet. This section will help you create a server using SSE.

- **[Use AI Toolkit](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/06-aitk/README.md)**  
    AI Toolkit is a great tool that will help you manage your AI and MCP workflow.

- **[Testing your server](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/07-testing/README.md)**  
    Testing is an important part of the development process. This section will help you test using several different tools.

- **[Deploy your server](https://github.com/microsoft/mcp-for-beginners/blob/main/03-GettingStarted/08-deployment/README.md)**  
    How do you go from local development to production? This section will help you develop and deploy your server.

- **[Practical Implementation](https://github.com/microsoft/mcp-for-beginners/blob/main/04-PracticalImplementation/README.md)**  
    Using SDKs across different languages, debugging, testing, and validation, crafting reusable prompt templates and workflows.

- **[Advanced Topics in MCP](https://github.com/microsoft/mcp-for-beginners/blob/main/05-AdvancedTopics/README.md)**  
    Multi-modal AI workflows and extensibility, secure scaling strategies, MCP in enterprise ecosystems.

- **[Community Contributions](https://github.com/microsoft/mcp-for-beginners/blob/main/06-CommunityContributions/README.md)**  
    How to contribute code and docs, collaborating via GitHub, community-driven enhancements and feedback.

- **[Insights from Early Adoption](https://github.com/microsoft/mcp-for-beginners/blob/main/07-LessonsFromEarlyAdoption/README.md)**  
    Real-world implementations and what worked, building and deploying MCP-based solutions, trends and future roadmap.

- **[Best Practices for MCP](https://github.com/microsoft/mcp-for-beginners/blob/main/08-BestPractices/README.md)**  
    Performance tuning and optimization, designing fault-tolerant MCP systems, testing and resilience strategies.

- **[MCP Case Studies](https://github.com/microsoft/mcp-for-beginners/blob/main/09-CaseStudy/README.md)**  
    Deep-dives into MCP solution architectures, deployment blueprints and integration tips, annotated diagrams and project walkthroughs.

## **Practical Implementation**

Throughout the curriculum, you’ll explore key MCP concepts alongside diverse, hands-on samples that demonstrate practical applications in multiple programming languages. For example, a simple calculator server illustrates how MCP principles can be applied to build API endpoints for operations like addition, subtraction, multiplication, and division, complete with type safety and error handling.
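As a rough sketch of what such a calculator tool handler might look like (the names here are illustrative, not the repo&apos;s actual code):

```typescript
// Illustrative sketch of a calculator tool dispatch with type safety and error handling.
type CalcArgs = { a: number; b: number };

function calculate(name: string, { a, b }: CalcArgs): number {
  switch (name) {
    case "add":      return a + b;
    case "subtract": return a - b;
    case "multiply": return a * b;
    case "divide":
      // Surface invalid input as an error rather than returning Infinity
      if (b === 0) throw new Error("Division by zero");
      return a / b;
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}

console.log(calculate("add", { a: 2, b: 3 })); // → 5
```

An MCP server would wrap each of these operations as a named tool with a JSON input schema, exactly as in the earlier starship examples.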

## **Advanced Concepts** and Case Study

In addition to covering MCP fundamentals, the course walks you through more advanced topics. This includes multi-modal integrations, highlighting how AI models can expand beyond text-only capabilities to include data types like images and audio. You&apos;ll also learn about scalable MCP architectures, security, performance optimization, and more.

The course also walks you through an [Azure AI Travel Agents](https://github.com/Azure-Samples/azure-ai-travel-agents) case study that demonstrates using MCP in an AI agents scenario.

[![](/images/blog/ai-repo-of-the-week-mcp-for-beginners/image-6-1024x682.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/05/image-6-scaled.png)

## **Conclusion**

[MCP for Beginners](https://github.com/microsoft/mcp-for-beginners) is a great resource for anyone journeying into the realm of AI and MCP development. It provides developers with the necessary knowledge and tools to create flexible and scalable AI applications that can integrate with a variety of data sources.  
  
Dive into the repository, experiment with the provided examples and tools, and quickly get up-to-speed on what MCP is and how you can begin using it. Visit the [MCP for Beginners repository on GitHub](https://github.com/microsoft/mcp-for-beginners), star it, and help drive forward the mission of better, standardized AI integrations.

## Additional AI Courses[](https://github.com/microsoft/mcp-for-beginners/blob/main/README.md#-other-courses)

In addition to the MCP for Beginners course, check out the following courses as well:

- [AI Agents For Beginners](https://github.com/microsoft/ai-agents-for-beginners?WT.mc_id=academic-105485-koreyst)

- [Generative AI for Beginners using .NET](https://github.com/microsoft/Generative-AI-for-beginners-dotnet?WT.mc_id=academic-105485-koreyst)

- [Generative AI for Beginners using JavaScript](https://github.com/microsoft/generative-ai-with-javascript)

- [Generative AI for Beginners](https://github.com/microsoft/generative-ai-for-beginners?WT.mc_id=academic-105485-koreyst)

- [ML for Beginners](https://aka.ms/ml-beginners?WT.mc_id=academic-105485-koreyst)

- [Data Science for Beginners](https://aka.ms/datascience-beginners?WT.mc_id=academic-105485-koreyst)

- [AI for Beginners](https://aka.ms/ai-beginners?WT.mc_id=academic-105485-koreyst)

- [Mastering GitHub Copilot for AI Paired Programming](https://aka.ms/GitHubCopilotAI?WT.mc_id=academic-105485-koreyst)

- [Mastering GitHub Copilot for C#/.NET Developers](https://github.com/microsoft/mastering-github-copilot-for-dotnet-csharp-developers?WT.mc_id=academic-105485-koreyst)

- [Choose Your Own Copilot Adventure](https://github.com/microsoft/CopilotAdventures?WT.mc_id=academic-105485-koreyst)</content:encoded></item><item><title>AI Repo of the Week: GitHub Copilot Adventures</title><link>https://blog.codewithdan.com/ai-repo-of-the-week-github-copilot-adventures/</link><guid isPermaLink="true">https://blog.codewithdan.com/ai-repo-of-the-week-github-copilot-adventures/</guid><pubDate>Sat, 03 May 2025 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/ai-repo-of-the-week-github-copilot-adventures/image-1024x585.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/05/image.png)

Welcome to the AI Repo of the Week series! In this first post we&apos;re going to look at the [GitHub Copilot Adventures repo](https://github.com/microsoft/CopilotAdventures). Stay tuned for additional posts that cover AI-related repos across multiple languages and technologies.  

## GitHub Copilot Adventures

Kickstart your GitHub Copilot journey with [**GitHub** **Copilot Adventures**](https://github.com/microsoft/CopilotAdventures), a hands-on playground of “choose-your-own-adventure” coding challenges. In just a few minutes, you’ll go from predicting the next number in a sequence to simulating interstellar alignments, all guided by GitHub Copilot’s AI suggestions. Whether you’re brand new to Copilot or looking to level up your AI-paired programming skills, this repo offers step-by-step scenarios in multiple languages.

Each adventure is packaged as a story—complete with context, objectives, and high-level tasks—so you can dive straight into the code. Behind the scenes, a **Solutions** directory provides reference implementations in C#, JavaScript, and Python, just in case you get stuck and need a little assistance. By the end of your first run-through, you’ll have explored everything from console apps to HTTP calls, regex-powered text extraction, data structures, and even grid-based battle simulations—all with GitHub Copilot as your AI pair programmer.

**Note:** GitHub Copilot provides [Ask, Edit, and Agent modes](https://github.blog/ai-and-ml/copilot-ask-edit-and-agent-modes-what-they-do-and-when-to-use-them/). The adventures in this repo rely on what is referred to as &quot;Ask mode&quot; where you prompt the AI for code suggestions.

## Choose Your Own Adventure

Several &quot;adventures&quot; are provided in the repo, ranging from beginner to advanced. Brand new to GitHub Copilot? Start with the warm-up adventure, which introduces the core concepts you need to get started. From there, you can jump into any of the beginner, intermediate, or advanced adventures and build them in your chosen language. As mentioned, solutions are provided for C#, JavaScript, and Python in case you get stuck and need a little help.

**Warm-Up Adventure**

[![](/images/blog/ai-repo-of-the-week-github-copilot-adventures/image-1-1024x585.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/05/image-1.png)

Start here if you need a quick introduction to GitHub Copilot, what it is, and how you can get started using it.

- **Chamber of Echoes**: Predict the next number in an arithmetic sequence.

**Beginner Adventures**  

[![](/images/blog/ai-repo-of-the-week-github-copilot-adventures/image-2-1024x585.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/05/image-2.png)

- **The Clockwork Town of Tempora**: Calculate clock drift in minutes.

- **The Magical Forest of Algora**: Simulate dance moves to create magical effects.

**Intermediate Adventures**

[![](/images/blog/ai-repo-of-the-week-github-copilot-adventures/image-3-1024x585.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/05/image-3.png)

- **The Celestial Alignment of Lumoria**: Determine planetary light intensity based on shadows.

- **The Legendary Duel of Stonevale**: Simulate a rock-paper-scissors duel with weighted scoring.

- **The Scrolls of Eldoria**: Fetch and filter secrets using regex from a remote text file.

**Advanced Adventures**

[![](/images/blog/ai-repo-of-the-week-github-copilot-adventures/image-4-1024x585.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/05/image-4.png)

- **The Gridlock Arena of Mythos**: Build a turn-based grid battle with overlapping moves and score tracking.

## Conclusion

By tackling each adventure—from a simple echo chamber to a full gridlock arena—you’ll not only learn core programming concepts across languages but also see firsthand how GitHub Copilot accelerates development, improves code quality, and surfaces best practices to increase your productivity.

**Take the next step!** Explore the full adventure library, run the code locally (or create your own GitHub Codespace to get started super fast), and unleash the power of AI-paired coding today. [Visit the Copilot Adventures repo on GitHub](https://github.com/microsoft/CopilotAdventures)!

</content:encoded></item><item><title>Building a Marvel MCP Server with External APIs</title><link>https://blog.codewithdan.com/integrating-ai-with-external-apis-building-a-marvel-mcp-server/</link><guid isPermaLink="true">https://blog.codewithdan.com/integrating-ai-with-external-apis-building-a-marvel-mcp-server/</guid><pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/integrating-ai-with-external-apis-building-a-marvel-mcp-server/captain-america.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/captain-america.jpg)

_Image from the Marvel Developers API_  
  
**NOTE:** Marvel recently retired their API, so unfortunately it is no longer available. I&apos;m leaving this post up for historical purposes since the approach is still relevant for MCP servers. If you&apos;d like to see a similar example, check out my [DC Comics MCP server](https://github.com/DanWahlin/dc-comics-mcp).  
  
**Summary**

- MCP enables AI models to access external APIs through standardized tool calls, supercharging their capabilities with real-time data and actions.

- The [Marvel MCP server](https://github.com/DanWahlin/marvel-mcp) provides &quot;tools&quot; for interacting with the [Marvel API](https://developer.marvel.com), which is useful for fetching character and comic data and integrating it with AI systems. Even if you&apos;re not coding for the Avengers, it&apos;s a great way to learn MCP integration and create an MCP server for your own APIs.

## Introduction to Model Context Protocol (MCP)

Ever wish your AI assistant could tap into external APIs, fetch live data, or interact with tools beyond its built-in knowledge? That’s exactly what the Model Context Protocol (MCP) enables. MCP is a standardized way for AI systems to discover available tools, send them inputs, and get back results in a consistent format. It&apos;s a powerful way to extend what AI can do—especially for things like making API calls or querying databases, which models can’t do on their own. Think of it as the Rosetta Stone between AI models and the outside world.
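Under the hood, MCP requests and responses are JSON-RPC messages. As a rough illustration (shape per the MCP spec, abbreviated here), a host asking a server to run a tool sends something like this, using one of the Marvel tools described below:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_characters",
    "arguments": { "nameStartsWith": "Spider" }
  }
}
```

The server executes the matching tool handler and returns the result as content the model can read.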

To make this real (and fun), I put together a Marvel-themed MCP server. It acts as a bridge to the [Marvel API](https://developer.marvel.com), letting you pull in data about characters, comics, and more—perfect for giving AI apps some superhero flair.

Now, I realize that unless you’re working at Stark Industries, you probably won’t be using Marvel data in your day job. But hey, capes or not, this project gives you a practical blueprint for building your own API-connected MCP server that works with tools like [Claude Desktop](https://claude.ai/download) or [GitHub Copilot in VS Code](https://code.visualstudio.com).

## Project Overview

I originally built the [Marvel MCP server project](https://github.com/danwahlin/marvel-mcp) to experiment with integrating API data into AI systems. I needed some realistic APIs to use and the Marvel APIs worked really well. They&apos;re rich, fun to explore, and perfect for a demo that’s a little more exciting than pulling weather data.  
  
The project&apos;s structure includes several key files:

- **src/index.ts**: Sets up the MCP stdio server and handles requests for listing and calling tools.

- **src/tools/tools.ts**: Defines MCP tools used to interact with the Marvel API, each with a description, input schema, and handler function. Each tool is in its own folder and defines the schemas it needs and the overall functionality.

- **src/tools/schemas.ts**: Contains shared [Zod schemas](https://zod.dev/) used by tools that call the Marvel API. The schemas are generated from the OpenAPI spec available from [https://gateway.marvel.com/docs/public](https://gateway.marvel.com/docs/public).

- **src/utils.ts**: Provides utility functions used by the MCP server tools to handle authentication (via public and private keys that the Marvel Developer API provides) and HTTP requests.

- **src/instructions.ts:** Provides general prompts used across all tools.

- **src/server.ts** and **src/streamable-http.ts:** The MCP server runs over stdio by default (that&apos;s the option to use in most MCP hosts - more on this later). However, these two files also enable the newer streamable HTTP transport.

Let&apos;s walk through several of these files and highlight key aspects that are used to create the MCP server.

## The Role of Schemas

APIs accept input parameters and return a specific type of response. MCP tools need to know what to pass to an API endpoint and what type of response to expect. That&apos;s where schemas come into play. Each tool folder, such as **tools/get\_characters**, has the schemas used by the tool as well as the functionality to call the target API endpoint. Schemas define input parameters passed from the tool to the API (filters, IDs, etc.) as well as the shape of the response (ID, name, description, etc.). Schemas rely on the [Zod npm package](https://zod.dev) to define an object&apos;s structure and data types.

Here&apos;s an example of how Zod is used to create **GetCharactersSchema** which is located in the **tools/get\_characters/schemas.ts** file.

##### GetCharactersSchema

```typescript
export const GetCharactersSchema = z.object({
  name: z.string().optional(),
  nameStartsWith: z.string().optional(),
  modifiedSince: z.string().optional(),
  comics: z.string().optional(),
  series: z.string().optional(),
  events: z.string().optional(),
  stories: z.string().optional(),
  orderBy: z.string().optional(),
  limit: z.number().min(1).max(100).optional(),
  offset: z.number().optional(),
});
```

This schema defines the input parameters used to fetch characters from the API:

- Optional filters like name, nameStartsWith, and modifiedSince for searching and date filtering.

- Optional comma-separated lists for comics, series, events, and stories to filter by related entities.

- orderBy for sorting results, and limit and offset for pagination, with limit constrained between 1 and 100.
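To see what that validation buys you, here&apos;s a dependency-free sketch of the kind of check **GetCharactersSchema** performs. The real code just calls Zod&apos;s `.parse()`; this hand-rolled version only mirrors the `limit` constraint for illustration:

```typescript
// Illustrative only: mimics part of what GetCharactersSchema.parse() enforces.
type CharacterQuery = {
  name?: string;
  nameStartsWith?: string;
  limit?: number;
  offset?: number;
};

function parseCharacterQuery(args: CharacterQuery): CharacterQuery {
  if (args.limit !== undefined) {
    // Zod's z.number().min(1).max(100) enforces the same bounds.
    if (1 > args.limit || args.limit > 100) {
      throw new Error(`limit must be between 1 and 100, got ${args.limit}`);
    }
  }
  return args;
}

// Valid input passes through unchanged; an out-of-range limit throws.
parseCharacterQuery({ nameStartsWith: 'Spider', limit: 10 });
```

Zod handles all of this declaratively, which is why it&apos;s a good fit here.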

In addition to input schemas, there are also output schemas as mentioned earlier. Here&apos;s an example of **CharacterSchema** which represents the shape of the data in the API response. Because it&apos;s used across multiple tools, it&apos;s defined in **tools/schemas.ts**.

##### CharacterSchema

```typescript
export const CharacterSchema = z.object({
  id: z.number(),
  name: z.string(),
  description: z.string(),
  modified: z.string(),
  resourceURI: z.string(),
  urls: z.array(UrlSchema),
  thumbnail: ImageSchema,
  comics: ListSchema(ComicSummarySchema),
  stories: ListSchema(StorySummarySchema),
  events: ListSchema(EventSummarySchema),
  series: ListSchema(SeriesSummarySchema),
});
```

This schema defines a Marvel character, including:

- **id**: A numeric identifier.

- **name**: The character&apos;s name as a string.

- **description**: A character description.

- **modified**: A string representing the last modification date.

- **resourceURI**: The canonical URL for the character.

- **urls**: An array of UrlSchema objects, likely for external links.

- **thumbnail**: An ImageSchema for the character&apos;s image.

- **comics, stories, events, series**: Lists with respective summary schemas.

This schema ensures that any character data received from an API call conforms to the schema structure, which helps an AI system better understand the data it receives.

Creating schemas by hand is really tedious, so I used [Grok 3](https://grok.com/chat) to convert portions of Marvel&apos;s [OpenAPI spec](https://gateway.marvel.com/docs/public) into the desired schemas. GitHub Copilot, ChatGPT, and others should be able to handle the conversion for you as well.

## The Role of Tools

One of the key features of MCP servers is [tools](https://modelcontextprotocol.io/docs/concepts/tools). Tools allow an LLM used within an AI system to perform actions through the MCP server. Think of tools as buttons the AI can press to go do something useful—like asking the Marvel API for data.

The **src/tools/tools.ts** file defines several tools for the Marvel MCP server, each with a description, schema, and handler. Here&apos;s an example of the **get\_characters** tool used to call the Marvel API.

##### get\_characters Tool

```typescript
export const get_characters = {
    description: `Fetch Marvel characters with optional filters.`,
    schema: GetCharactersSchema,
    handler: async (args: any) =&gt; {
        const argsParsed = GetCharactersSchema.parse(args);
        const res = await httpRequest(&apos;/characters&apos;, serializeQueryParams(argsParsed));
        return CharacterDataWrapperSchema.parse(res);
    }
};
```

- **Description**: Fetches Marvel characters, allowing optional filters like name, modification date, and related entities.

- **Schema**: Uses GetCharactersSchema, ensuring inputs are validated against expected parameters.

- **Handler**: Parses the input using GetCharactersSchema, serializes query parameters using serializeQueryParams from utils.ts, makes an HTTP request to Marvel&apos;s /characters API using httpRequest (also in utils.ts), and parses the response with CharacterDataWrapperSchema. This ensures the response conforms to the expected character data structure and helps the AI system understand the data it receives.
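The post doesn&apos;t show `serializeQueryParams`, but its job is simple: flatten the parsed arguments into the string map that `httpRequest` expects, dropping anything undefined. A minimal hypothetical version might look like this (names and behavior assumed, not copied from the repo):

```typescript
// Hypothetical sketch: the actual utils.ts implementation may differ.
function serializeQueryParams(params: object): { [key: string]: string } {
  const query: { [key: string]: string } = {};
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) {
      query[key] = String(value); // numbers, booleans, etc. become strings
    }
  }
  return query;
}

// { nameStartsWith: 'Spider', limit: 10 } becomes { nameStartsWith: 'Spider', limit: '10' }
```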

Another tool is _get\_comics\_for\_character_:

```typescript
export const get_comics_for_character = {
    description: `Fetch Marvel comics filtered by character ID and optional filters.`,
    schema: GetComicsForCharacterSchema,
    handler: async (args: any) =&gt; {
        const { characterId, ...rest } = GetComicsForCharacterSchema.parse(args);
        const res = await httpRequest(`/characters/${characterId}/comics`, serializeQueryParams(rest));
        return ComicDataWrapperSchema.parse(res);
    }
};
```

- **Description**: Fetches comics featuring a specific character, with additional optional filters like format, date range, and more.

- **Schema**: Uses GetComicsForCharacterSchema, ensuring input validation.

- **Handler**: Parses the input, extracts characterId and other parameters, makes a request to /characters/{characterId}/comics with serialized query parameters, and parses the response with ComicDataWrapperSchema, ensuring comic data integrity.

The **src/tools/tools.ts** file aggregates all of the available tools:

```typescript
import { get_characters } from &apos;./get_characters/index.js&apos;;
import { get_character_by_id } from &apos;./get_character_by_id/index.js&apos;;
import { get_comics_for_character } from &apos;./get_comics_for_character/index.js&apos;;
import { get_comics } from &apos;./get_comics/index.js&apos;;
import { get_comic_by_id } from &apos;./get_comic_by_id/index.js&apos;;
import { get_characters_for_comic } from &apos;./get_characters_for_comic/index.js&apos;;

export const marvelTools = {
    get_characters,
    get_character_by_id,
    get_comics_for_character,
    get_comics,
    get_comic_by_id,
    get_characters_for_comic
};

export type ToolName = keyof typeof marvelTools;
```

These tools demonstrate how the server abstracts Marvel API calls, providing a standardized interface for MCP interactions. The input schemas help the AI system understand what data it needs to pass, and the output schemas help the AI system&apos;s LLM understand the data it received.

## Utility Functions

Looking at the previous tools, you may have noticed that an **httpRequest()** function handles the calls to the Marvel API endpoints. It&apos;s located in **src/utils.ts**. Here&apos;s what the function looks like.

##### httpRequest()

```typescript
export async function httpRequest(endpoint: string, params: Record&lt;string, string | number | undefined&gt; = {}) {
  const url = new URL(`${MARVEL_API_BASE}${endpoint}`);

  const authParams = createAuthParams();
  url.searchParams.set(&apos;ts&apos;, authParams.ts);
  url.searchParams.set(&apos;apikey&apos;, authParams.apikey);
  url.searchParams.set(&apos;hash&apos;, authParams.hash);

  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) {
      url.searchParams.set(key, String(value));
    }
  }

  const res = await fetch(url.toString());
  if (!res.ok) {
    const text = await res.text();
    throw new Error(`Marvel API error: ${res.status} - ${text}`);
  }

  return res.json();
}
```

- **Purpose**: Makes HTTP requests to the Marvel API, handling authentication with timestamp, API key, and hash, and ensuring error handling for non-OK responses.

- **Usage**: Called by tool handlers to fetch data, ensuring secure and reliable API interactions.
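The `createAuthParams()` helper (also in **src/utils.ts**) isn&apos;t shown above. Per Marvel&apos;s documented auth scheme, server-side callers send a timestamp, the public key, and an MD5 hash of the timestamp plus the private and public keys. A sketch of how that might be implemented (the signature and structure here are assumptions, not copied from the repo):

```typescript
import { createHash } from 'node:crypto';

// Assumed helper: Marvel expects ts, apikey, and md5(ts + privateKey + publicKey).
function createAuthParams(publicKey: string, privateKey: string) {
  const ts = Date.now().toString();
  const hash = createHash('md5')
    .update(ts + privateKey + publicKey)
    .digest('hex');
  return { ts, apikey: publicKey, hash };
}
```

In the actual server, the keys come from the `MARVEL_PUBLIC_KEY` and `MARVEL_PRIVATE_KEY` environment variables shown in the configuration sections later in this post.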

## The MCP Server

MCP servers can support a variety of transport types including stdio, SSE, and HTTP streaming (the newest option). Think of your MCP server quietly listening like Jarvis in the background, ready to take orders via stdin and reply via stdout as an AI system needs more information. The Marvel MCP server uses _stdio_ by default since it&apos;s currently supported by most MCP hosts such as Claude Desktop and GitHub Copilot in VS Code. However, the project also supports the [streamable HTTP transport](https://modelcontextprotocol.io/specification/draft/basic/transports#streamable-http) as mentioned earlier.  
  
An MCP server running with `stdio` acts like a background process that listens for JSON-formatted tool calls via standard input, processes the requests (like calling an API or performing a task), and then returns the results via standard output.  
  
Here&apos;s some of the key code in the Marvel MCP server&apos;s **src/index.ts** file including how the server is initialized, how tools are exposed to MCP clients, and how individual tools are invoked.

##### Initializing the MCP Server

```typescript
import { Server } from &apos;@modelcontextprotocol/sdk/server/index.js&apos;;
import { StdioServerTransport } from &apos;@modelcontextprotocol/sdk/server/stdio.js&apos;;
import { CallToolRequestSchema, ListToolsRequestSchema } from &apos;@modelcontextprotocol/sdk/types.js&apos;;
import { zodToJsonSchema } from &apos;zod-to-json-schema&apos;;
import { marvelTools, ToolName } from &apos;./tools.js&apos;;

...

const server = new Server(
  {
    name: &apos;marvel-mcp&apos;,
    version: &apos;1.5.0&apos;,
    description: &apos;An MCP Server to retrieve Marvel character information.&apos;,
  },
  {
    capabilities: {
      tools: {},
    },
  }
);
```

- Initializes the MCP server with metadata, including name, version, and description, and specifies that the server supports tools.

##### List Tools Request Handler

```typescript
server.setRequestHandler(ListToolsRequestSchema, async () =&gt; {
  return {
    tools: Object.entries(marvelTools).map(([name, tool]) =&gt; ({
      name,
      description: tool.description,
      inputSchema: zodToJsonSchema(tool.schema),
    })),
  };
});
```

- Handles requests to list tools, mapping each tool from **marvelTools** (shown earlier) to include name, description, and input schema.
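For illustration, the response a client gets back from that handler looks roughly like this (abbreviated; the real `inputSchema` is the full JSON Schema that zodToJsonSchema generates from GetCharactersSchema):

```json
{
  "tools": [
    {
      "name": "get_characters",
      "description": "Fetch Marvel characters with optional filters.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "nameStartsWith": { "type": "string" },
          "limit": { "type": "number", "minimum": 1, "maximum": 100 }
        }
      }
    }
  ]
}
```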

##### Call Tool Request Handler

```typescript
server.setRequestHandler(CallToolRequestSchema, async (request) =&gt; {
  console.error(`Processing tool request: ${request.params.name}`);

  if (!request.params.arguments) {
    throw new Error(&apos;Arguments are required&apos;);
  }

  const { name, arguments: args } = request.params;

  if (!(name in marvelTools)) {
    throw new Error(`Unknown tool: ${name}`);
  }

  const tool = marvelTools[name as ToolName];

  if (!tool) {
    throw new Error(`Tool not found: ${name}`);
  }

  try {
    const result = await tool.handler(args);
    console.error(`Completed tool request: ${name}`);

    return {
      content: [{ type: &apos;text&apos;, text: JSON.stringify(result) }],
    };
  } catch (error) {
    if (error instanceof Error) {
      throw new Error(`Error processing ${name}: ${error.message}`);
    }
    throw error;
  }
});
```

- Handles requests by an AI system to call a specific tool, validating the input, checking if the tool exists, executing the handler, and returning the result as JSON.

This setup ensures the server can respond to MCP protocol requests, providing a robust interface for tool interactions.

## Using the MCP Server in an MCP Host

Now that you’ve got your MCP server built, how do you connect it to something like Claude or Copilot? It’s easier than you might think. Just a bit of configuration and you’re off to the races.

For example, to use the Marvel MCP server with **Claude Desktop** you can add the following JSON into the **claude\_desktop\_config.json** file.

##### Claude Desktop MCP Server Configuration

```json
{
  &quot;mcpServers&quot;: {
    &quot;marvel-mcp&quot;: {
      &quot;type&quot;: &quot;stdio&quot;,
      &quot;command&quot;: &quot;npx&quot;,
      &quot;args&quot;: [
        &quot;-y&quot;,
        &quot;@codewithdan/marvel-mcp&quot;
      ],
      &quot;env&quot;: {
        &quot;MARVEL_PUBLIC_KEY&quot;: &quot;YOUR_PUBLIC_KEY&quot;,
        &quot;MARVEL_PRIVATE_KEY&quot;: &quot;YOUR_PRIVATE_KEY&quot;,
        &quot;MARVEL_API_BASE&quot;: &quot;https://gateway.marvel.com/v1/public&quot;
      }
    }
  }
}
```

To use the server with **GitHub Copilot in VS Code** you can add the following section to your settings:

##### VS Code MCP Server Configuration

```json
&quot;mcp&quot;: {
 &quot;servers&quot;: {
     &quot;marvel-mcp&quot;: {
         &quot;command&quot;: &quot;npx&quot;,
         &quot;args&quot;: [
             &quot;-y&quot;,
             &quot;@codewithdan/marvel-mcp&quot;
         ],
         &quot;env&quot;: {
             &quot;MARVEL_PUBLIC_KEY&quot;: &quot;YOUR_PUBLIC_KEY&quot;,
             &quot;MARVEL_PRIVATE_KEY&quot;: &quot;YOUR_PRIVATE_KEY&quot;,
             &quot;MARVEL_API_BASE&quot;: &quot;https://gateway.marvel.com/v1/public&quot;
         }
      }
 }
}
```

You&apos;ll notice that two keys are needed. You can get those from the [https://developer.marvel.com](https://developer.marvel.com) website after registering.

What if you don&apos;t want to store keys directly in the file? With VS Code you can use the following configuration. When you start the MCP server you&apos;ll be prompted to enter each key and VS Code will store it for you (without showing it). Here&apos;s what that configuration looks like:  
  
##### VS Code MCP Server Configuration with Inputs

```json
&quot;mcp&quot;: {
  &quot;inputs&quot;: [
      {
          &quot;type&quot;: &quot;promptString&quot;,
          &quot;id&quot;: &quot;marvel-public-api-key&quot;,
          &quot;description&quot;: &quot;Marvel public API Key&quot;,
          &quot;password&quot;: true
      },
      {
          &quot;type&quot;: &quot;promptString&quot;,
          &quot;id&quot;: &quot;marvel-private-api-key&quot;,
          &quot;description&quot;: &quot;Marvel private API Key&quot;,
          &quot;password&quot;: true
      }
  ],
  &quot;servers&quot;: {
    &quot;marvel-mcp&quot;: {
        &quot;command&quot;: &quot;npx&quot;,
        // &quot;command&quot;: &quot;node&quot;,
        &quot;args&quot;: [
            &quot;-y&quot;,
            &quot;@codewithdan/marvel-mcp&quot;
            // &quot;/PATH/TO/marvel-mcp/dist/index.js&quot;
        ],
        &quot;env&quot;: {
            &quot;MARVEL_PUBLIC_KEY&quot;: &quot;${input:marvel-public-api-key}&quot;,
            &quot;MARVEL_PRIVATE_KEY&quot;: &quot;${input:marvel-private-api-key}&quot;,
            &quot;MARVEL_API_BASE&quot;: &quot;https://gateway.marvel.com/v1/public&quot;
        }
    }
  }
}
```

Once the configuration is in place and the MCP server is started by selecting the Start option above &quot;marvel-mcp&quot; in the file, the **@codewithdan/marvel-mcp** package will be downloaded from npm and the server will be run locally. From there, a user can interact with the MCP host and ask questions about Marvel characters and comics. Here&apos;s an example of doing that from GitHub Copilot.

[![Example of a prompt in GitHub Copilot in VS Code that triggers a Marvel MCP server tool.](/images/blog/integrating-ai-with-external-apis-building-a-marvel-mcp-server/2025-03-30_00-57-50.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/04/2025-03-30_00-57-50.png)

Examples of additional prompts could include:

&gt; What comics is Wolverine in?
&gt; 
&gt; Which characters appear in the Avengers comics?
&gt; 
&gt; What characters are in the Hedge Knight II: Sworn Sword (2007) comic?

Once the MCP server&apos;s tools are known to the MCP host, the AI system should call them anytime it needs an answer that it can&apos;t provide on its own.

## Conclusion

And there you have it—your very own MCP server powered by the Marvel universe! While you might not be saving the world with superhero data in your day job, this project gives you a solid blueprint for integrating _any_ external API into an AI system using the Model Context Protocol.

Peek under the hood at the code, schemas, tools, and server setup, and you’ll be well on your way to building your own API-powered AI sidekicks. Whether it’s fetching weather data, querying a knowledge base, or interfacing with your company’s internal tools, the process is the same.

#### Key Resources

- [Marvel MCP Server Project on GitHub](https://github.com/DanWahlin/marvel-mcp)

- [Marvel Developer API Documentation](https://developer.marvel.com/documentation/getting_started)

- [Model Context Protocol Overview](https://modelcontextprotocol.org)

- [Zod Validation Library](https://zod.dev)

- [Open API Specification](https://www.openapis.org)</content:encoded></item><item><title>Using RealTime AI - Part 1: Getting Started with the Fundamentals of Low-Latency AI Magic</title><link>https://blog.codewithdan.com/using-realtime-ai-part-1-getting-started-with-the-fundamentals-of-low-latency-ai-magic/</link><guid isPermaLink="true">https://blog.codewithdan.com/using-realtime-ai-part-1-getting-started-with-the-fundamentals-of-low-latency-ai-magic/</guid><pubDate>Fri, 14 Mar 2025 00:00:00 GMT</pubDate><content:encoded>Have you ever wished your AI could keep up with you—like, actually match your pace? You know, the kind of speed where you toss out a question and get a snappy reply before you’ve even blinked twice? Enter **Realtime AI**—a total game-changer that I&apos;ll have to admit had me grinning like I had just unlocked a secret superpower the first time I got it running.

In this first installment of the RealTime AI series, I’ll break down what Realtime AI is for you, why it’s awesome, and provide you with a first look at the [RealTime AI App](https://github.com/DanWahlin/RealtimeAIApp-JS)—a fun demo app that brings this tech to life. Let’s jump in!

## What’s RealTime AI All About?

Imagine traditional AI as that friend who takes a while to text back—you send a message, twiddle your thumbs, and hope they reply before you’ve lost interest. RealTime AI, though? It’s like a live call—immediate, fluid, and right there with you. Powered by the [**OpenAI Realtime API** and model](https://platform.openai.com/docs/guides/realtime), it’s designed to deliver low-latency, multimodal magic, processing voice and text inputs in milliseconds for conversations that feel as natural as chatting with a friend.

The secret? It’s built on models like _gpt-4o-realtime_ that are optimized for real-time action. The realtime models handle everything from voice activity detection to audio streaming, and even throw in function calling support to let your AI take action—like pulling up customer info, formatting the response a specific way, or placing an order mid-chat. It’s a one-stop shop for building seamless, expressive experiences, without resorting to multiple calls to different AI models. What&apos;s really great about it is its support for audio or text inputs from the user.
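To make that a bit more concrete: the Realtime API is event-driven over a WebSocket, and clients steer the model by sending small JSON events. For example, a client can ask for a spoken-plus-text reply with a `response.create` event along these lines (abbreviated; see the OpenAI Realtime API docs for the full shape):

```json
{
  "type": "response.create",
  "response": {
    "modalities": ["audio", "text"],
    "instructions": "Answer briefly and speak clearly."
  }
}
```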

## Why It’s a Big Deal

Have you ever tried cobbling together a voice assistant with separate speech recognition, text processing, and text-to-speech models? It can be challenging enough to make you want to pull your hair out. The RealTime API flips that script. It streams audio inputs and outputs directly, handles interruptions like a seasoned conversationalist (think ChatGPT’s advanced voice mode), and does it all with a single API call.

An app with RealTime AI support can:

- Teach a user a language and even check their pronunciation.

- Allow a user to speak to a form and have it filled out automatically.

- Provide a user information about employee benefits as they talk with the RealTime AI assistant.

- Allow a customer to place an order using their voice, check an order&apos;s status, and more.

- Many more scenarios...

## RealTime AI App in Action

The **[RealTime AI App](https://github.com/DanWahlin/RealtimeAIApp-JS)** is a web-based project that I wrote using Angular on the front-end and Node.js on the back-end. It really shows off what this tech can do and provides two main features.

- **Language Coach**: Speak a phrase like “Hola, ¿cómo te llamas?” and it’ll chime in with, “Nice, but emphasize the ‘c’ more in &apos;cómo&apos;!” It’s your patient and kind language and pronunciation tutor.

[![](/images/blog/using-realtime-ai-part-1-getting-started-with-the-fundamentals-of-low-latency-ai-magic/image-9-1024x847.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-9.png)

- **Medical Form Assistant**: Say “Patient John Smith, 42 years old, history of pneumonia” and it returns a JSON object like { &quot;name&quot;: &quot;John Smith&quot;, &quot;age&quot;: &quot;42&quot;, &quot;notes&quot;: &quot;pneumonia&quot; } and fills in a form for you. Medical assistants, nurses, and doctors can speak directly to a form (no keyboard required) as they&apos;re busily hurrying around a hospital environment and have the form automatically filled in with patient details.

[![](/images/blog/using-realtime-ai-part-1-getting-started-with-the-fundamentals-of-low-latency-ai-magic/image-10-1024x916.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-10.png)

Since there are several parts to the RealTime AI App, I&apos;ll break it down into individual pieces for you through a series of posts that follow. In the meantime, here&apos;s a high-level overview of the key parts of the app.

- **Client**: This is you—the user interacting with the app via your browser. It sends audio or text inputs (like saying “Hello” or typing a question) to kick things off. It&apos;s written using Angular.

- **RealTime Session**: The Node.js code where the main action takes place - it manages the flow. It uses a client WebSocket to receive your inputs and send back responses, while a RealTime AI WebSocket connects to the OpenAI API. The logic block processes messages, ensuring everything runs smoothly between the client and the AI.

- **OpenAI RealTime API**: This is the brains of the operation. It receives audio/text from the Realtime Session, processes it with the gpt-4o-realtime model, and sends back audio/text responses. The app supports calling OpenAI or Azure OpenAI.

[![RealTime AI App diagram showing the client, realtime session, and OpenAI realtime interaction.](/images/blog/using-realtime-ai-part-1-getting-started-with-the-fundamentals-of-low-latency-ai-magic/image-8-559x1024.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-8.png)

## What’s Next?

This is just the start! In **Part 2**, I’ll dive into the server-side details—Node.js, WebSockets, and some code to tie it all together. You&apos;ll also see how prompts play a key role in determining the type of responses returned to the user. Stay tuned!

**Found this helpful?** Share it with your crew and follow me for the next installment:

- Twitter: [@danwahlin](https://twitter.com/danwahlin)

- LinkedIn: [Dan Wahlin](https://www.linkedin.com/in/danwahlin)</content:encoded></item><item><title>AI Time-Travel: Your JavaScript Quest Begins!</title><link>https://blog.codewithdan.com/ai-time-travel-your-javascript-quest-begins/</link><guid isPermaLink="true">https://blog.codewithdan.com/ai-time-travel-your-javascript-quest-begins/</guid><pubDate>Wed, 12 Mar 2025 00:00:00 GMT</pubDate><content:encoded>&gt; Generative AI for Beginners - A JavaScript Adventure. Code with Legends, Conquer the Future!

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image.png)

Imagine sipping espresso with Leonardo da Vinci, debating AI over sketches of flying machines. Or quizzing Ada Lovelace on code in her misty mansion. The [_Generative AI for Beginners - A JavaScript Adventure_ course](https://github.com/microsoft/generative-ai-with-javascript) (free on GitHub!) blends time-traveling thrills with hands-on JavaScript AI concepts. No time machine required—just curiosity and a keyboard!  
  
Generative AI is the secret sauce behind next-level chatbots, content wizards, and modern apps. For JavaScript devs, it’s your ticket to crafting smarter and more productive apps for users. This course? It’s your launchpad—no AI PhD needed!

## The Epic Lineup

Five chapters, one wild and fun ride:

**Chapter 1:** Decode AI basics with Leonardo’s Time Beetle.

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-3.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-3.png)

In 1860s London, you get a letter from Charles Babbage. A mysterious device sends you to 300 BC Alexandria, where AI helps build a lighthouse! Ready to code with history?  

**Chapter 2:** Build your first AI app in a Renaissance workshop.

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-4.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-4.png)

Lost in time with AI &amp; JavaScript! After helping build the Lighthouse of Alexandria, you wake up in 15th-century Florence. Leonardo da Vinci needs your help designing an AI-powered assistant! Ready to build?  

**Chapter 3:** Master prompts while dodging Roman spears.

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-5.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-5.png)

  
Stuck in the past? Use AI to shape the future! After a chaotic escape with Leonardo da Vinci, you&apos;re airborne over ancient Rome—now, prompt engineering is your key to landing safely. Learn how to craft better AI prompts &amp; control outputs!  

**Chapter 4:** Tame AI outputs with Aztec flair.  
  

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-6.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-6.png)

From chaos to clarity with AI! After a wild escape through time, you land in the Aztec Empire—but now, structured output is your key to winning a game of Patolli with Montezuma. Learn how to format AI responses for better results with JSON, tables &amp; more!  

**Chapter 5:** Supercharge smarts with Ada’s RAG (Retrieval-Augmented Generation) magic.

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-7.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-7.png)

Supercharge AI with your own data! After a foggy landing at Ada Lovelace’s mansion, she unveils RAG (Retrieval-Augmented Generation)—a way to make AI smarter by pulling in real-time data. Ready to upgrade your AI?  
  
**Chapter 6:** Stay tuned! More chapters coming soon!

## Time-Travel Vibes

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-1-1024x755.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-1.png)

- **Chat with Historical Legends:** Use the provided companion app to chat with history&apos;s legends.

- **Real-world Skills:** Learn real-world AI skills to boost your knowledge and help you integrate Generative AI technologies into your apps.

- **GitHub Glory:** Fork it, star it, own it!

## **Ready to Time-Travel Through History?**

[![](/images/blog/ai-time-travel-your-javascript-quest-begins/image-2.webp)](https://blog.codewithdan.com/wp-content/uploads/2025/03/image-2.png)

Time travel, AI, &amp; JavaScript—what a ride! From Da Vinci’s workshop to the Aztec Empire and Ada Lovelace’s mansion, you’ll unlock the power of Generative AI, RAG, and structured output. Ready to build your own AI adventure? Get started today!  
  
[https://github.com/microsoft/generative-ai-with-javascript](https://github.com/microsoft/generative-ai-with-javascript)</content:encoded></item><item><title>What are Some Good Generative AI Prompt Engineering Resources?</title><link>https://blog.codewithdan.com/what-are-some-good-generative-ai-prompt-engineering-resources/</link><guid isPermaLink="true">https://blog.codewithdan.com/what-are-some-good-generative-ai-prompt-engineering-resources/</guid><pubDate>Mon, 18 Dec 2023 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/what-are-some-good-generative-ai-prompt-engineering-resources/image-2-1024x585.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/12/image-2.png)

One AI question I get a lot is, &quot;What are some good prompt engineering resources?&quot; Here are a few I&apos;ve found useful:

✅ [OpenAI Prompt Engineering Documentation](https://platform.openai.com/docs/guides/prompt-engineering)  
✅ [Azure OpenAI: Introduction to Prompt Engineering](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering)  
✅ [Prompt Engineering Guide](https://promptingguide.ai/)  
✅ [AI Builder Prompt Engineering Guide (PDF)](https://go.microsoft.com/fwlink/?linkid=2255775)  
✅ [DeepLearning.AI: ChatGPT Prompt Engineering for Developers](https://learn.deeplearning.ai/chatgpt-prompt-eng/lesson/1/introduction)  
✅ [Brex&apos;s Prompt Engineering Guide](https://github.com/brexhq/prompt-engineering)

There are many more out there, but these will help get you started. If you have a favorite, please add a comment so others can see it too.

If you want to start using prompts in different Generative AI scenarios and learn more about AI building blocks in general, check out:

✅ [OpenAI Cookbook](https://cookbook.openai.com/) (examples and guides)  
✅ [Generative AI for Beginners](https://github.com/microsoft/generative-ai-for-beginners) (hands-on tutorial)  
✅ [Integrate OpenAI, Communication, and Organizational Data Features into a Line of Business App](https://aka.ms/openai-acs-msgraph) (hands-on tutorial)  
✅ [Fundamentals of Generative AI](https://learn.microsoft.com/training/modules/fundamentals-generative-ai/) (hands-on tutorial)

Finally, prompts can be used with various Copilot scenarios too (GitHub Copilot, Microsoft Copilot, Copilot for Microsoft 365, and many more). If you want to learn about Copilot options and even go through some fun coding &quot;adventures&quot;, check out these resources:

[![](/images/blog/what-are-some-good-generative-ai-prompt-engineering-resources/image-1-1024x585.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/12/image-1.png)

✅ [Adopt, Extend and Build Copilot Experiences Across the Microsoft Cloud](https://aka.ms/MicrosoftCopilots) (roadmap for key copilots across Microsoft)  
✅ [Copilot Adventures](https://github.com/microsoft/CopilotAdventures) (coding challenges to help you learn how GitHub Copilot can help)  
✅ [Mastering GitHub Copilot for AI Paired Programming](https://github.com/microsoft/Mastering-GitHub-Copilot-for-Paired-Programming) (hands-on tutorial)</content:encoded></item><item><title>Video: Taking Your Line of Business Apps to the Next Level with OpenAI and GPT Models, Communication and Organizational Data</title><link>https://blog.codewithdan.com/video-taking-your-line-of-business-apps-to-the-next-level-openai-and-gpt-models-communication-and-organizational-data/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-taking-your-line-of-business-apps-to-the-next-level-openai-and-gpt-models-communication-and-organizational-data/</guid><pubDate>Wed, 16 Aug 2023 00:00:00 GMT</pubDate><content:encoded>I had the incredible opportunity to grace the stage at the 2023 ng-conf conference in beautiful Salt Lake City, Utah this past June. Every year, I look forward to this event, and it’s not just because of the scenic views or the pristine organization of the conference. The magic truly lies in the amazing people I get to chat with and the new friendships I forge. If you&apos;re after a blend of fun, entertainment, and education, ng-conf is the place to be. Plus, let&apos;s not forget the golden opportunity it provides to expand your network.

This year, I talked about a topic I&apos;m really passionate about - taking Line of Business (LOB) apps to the next level. By weaving in technologies from OpenAI, incorporating advanced communication features, and integrating organizational data, you can transform the way users interact with your apps. Think less distraction, more action. It&apos;s all about minimizing those pesky context shifts and ensuring that the data users need is right at their fingertips. The end goal? Increased productivity!

For those who missed it (or want to relive the moment - yes, there were a few unexpected moments of &quot;fun&quot; during the talk), I’ve got you covered. Watch the full talk here: 

**Thinking Outside the Box: Taking Your LOB Apps to the Next Level**  
  
Technologies covered include OpenAI &amp; GPT Models, Communication with Azure Communication Services, and retrieving Organizational Data with Microsoft Graph.

https://www.youtube.com/watch?v=TZnMTICby5E

The GitHub repo used for the talk can be found at:  
  
[https://github.com/DanWahlin/openai-acs-msgraph](https://github.com/DanWahlin/openai-acs-msgraph)

A hands-on tutorial based on the application can be found at:  
  
[https://aka.ms/openai-acs-msgraph](https://aka.ms/openai-acs-msgraph)  
  
If you&apos;d like additional details about the AI, Communication, and Organizational Data features covered, check out the [tutorial](https://aka.ms/openai-acs-msgraph) or [this blog post](https://blog.codewithdan.com/integrate-openai-communication-and-organizational-data-features-into-your-apps/).

Found this information useful? Please share it with others and follow me to get updates:

- **Twitter** - [https://twitter.com/danwahlin](https://twitter.com/danwahlin) 

- **LinkedIn** - [https://www.linkedin.com/in/danwahlin](https://www.linkedin.com/in/danwahlin)</content:encoded></item><item><title>The ABCs of AI Transformers, Tokens, and Embeddings: A LEGO Story</title><link>https://blog.codewithdan.com/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/</link><guid isPermaLink="true">https://blog.codewithdan.com/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/</guid><pubDate>Wed, 09 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

AI transformers have rapidly become one of the most popular and effective architectures in natural language processing and artificial intelligence. But what exactly are transformers, and how do they leverage embeddings to achieve state-of-the-art results on tasks like translation and text generation?

In this post, I&apos;ll attempt to demystify tokens, embeddings, and transformers by unveiling the magic behind their near-human linguistic abilities using a simple analogy - language is like LEGOs! While the overall goal is to introduce you to the key concepts, you&apos;ll find additional links at the bottom of the post that will allow you to dive deeper.

Let&apos;s get started by talking about the role of tokens.

## Tokens: The Building Blocks of Language

Before diving into transformers, let’s talk about a key aspect of AI and transformers: the token. You can think of sentences and words as molecules, whereas tokens are the atoms that make them up. Just like molecules are built from atoms, sentences are built from smaller token units.

[![](/images/blog/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/image-17-1024x852.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-17.png)

Ever thought of language as a complex LEGO masterpiece? Imagine words, sentences, and paragraphs as an intricate LEGO creation composed of many tokens.

[![](/images/blog/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/image-18.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-18.png)

While not as elaborate as the previous image, the following sentence will be converted into 5 tokens (or LEGO bricks, if you&apos;d like to think of it that way):

&gt; She ate the pizza.

[![](/images/blog/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/ai-tokens-she-ate-pizza.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/ai-tokens-she-ate-pizza.png)

Note: You can use [OpenAI&apos;s online tokenizer](https://platform.openai.com/tokenizer) tool to see how words are converted into tokens. Additional information about tokens can be found in their [documentation](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them).

Here&apos;s another example of how words map to tokens:

[![](/images/blog/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/2023-08-10_16-38-34-1024x857.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/2023-08-10_16-38-34.png)

By converting language into tokens, AI transformers can build meaning from language. But, there&apos;s more to the token story.
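As a rough sketch of the idea (real tokenizers such as OpenAI&apos;s use byte-pair encoding over learned subword units, so actual token boundaries will differ), a naive tokenizer that treats words and punctuation as the "bricks" might look like this:

```typescript
// Illustrative only: production tokenizers split on learned subword
// units (byte-pair encoding), not simple word/punctuation boundaries.
function naiveTokenize(text: string): string[] {
  // Each word or punctuation mark becomes its own "brick"
  return text.match(/\w+|[^\s\w]/g) ?? [];
}

console.log(naiveTokenize('She ate the pizza.'));
// [ 'She', 'ate', 'the', 'pizza', '.' ] - 5 token "bricks"
```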

## **Embeddings: A Token&apos;s Identity**

Tokens play a vital role in language processing, but they require a universal descriptor. This is where [embeddings](https://platform.openai.com/docs/guides/embeddings) enter the scene. Tokens are like the basic LEGO bricks - they provide the fundamental building blocks. But on their own, the transformer has no way to distinguish between bricks.

Embeddings are like the stickers, patterns, or colors added to the bricks to give them unique identities. Each blank brick now has decorative markings that set it apart.

For example, the token for &quot;ate&quot; may be decorated with a green and white sticker, while the token for &quot;pizza&quot; has vivid red and white stripes. The embeddings add symbolic meaning to the initially generic tokens. Now the transformer can easily identify each distinct &quot;token-brick&quot; by its decorative pattern or colors, similar to how we can differentiate LEGO pieces by appearance. These embedding &quot;markings&quot; allow the model to represent the unique meaning of each token.

Imagine you&apos;re building a LEGO model with a friend. Instead of asking, &quot;Can you hand me the brick that says &apos;She&apos; on it?&quot;, it&apos;s more intuitive to request, &quot;Can you hand me the blue circle brick?&quot;. In human terms, both questions might lead to the correct brick, but for machines, a more universal identifier, like the shape and color, simplifies the process.

Take the sentence &quot;She ate the pizza&quot;. This can be visualized using our LEGO analogy:

- \[She\]: \[Blue, Circle\]

- \[ate\]: \[Green, Rectangle\]

- \[the\]: \[Yellow, Square\]

- \[pizza\]: \[Red, Triangle\]

This works well from a human perspective, but how do machines interpret discrete words? Machines rely on numbers, not words. This brings us back to embeddings – the secret sauce behind Large Language Models (LLMs).

Machines would convert the above tokens into numerical vectors:

- \[She\]: \[0.1, 0.3, 0.5\]

- \[ate\]: \[0.7, 0.2, 0.1\]

- \[the\]: \[0.3, 0.1, 0.9\]

- \[pizza\]: \[0.2, 0.6, 0.1\]

These vectors encapsulate the essence of each word, making it digestible for AI transformers. Some embeddings even capture the order of words or differentiate content types, such as questions from answers. In practice, an LLM would use far larger vectors, enabling it to grasp the nuances and associations between words (for example, apple and banana are both fruits; car and truck are both vehicles). With these vectors, transformers can process language by focusing on these numerical representations and their interrelations.
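The usefulness of these vectors comes from measuring how &quot;close&quot; two of them are, commonly via cosine similarity. Here&apos;s a minimal sketch using the toy three-dimensional vectors above (real embeddings have hundreds or thousands of dimensions):

```typescript
// Cosine similarity: 1 means same direction, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const mag = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (mag(a) * mag(b));
}

const she = [0.1, 0.3, 0.5];
const pizza = [0.2, 0.6, 0.1];
console.log(cosineSimilarity(she, she).toFixed(2));   // '1.00'
console.log(cosineSimilarity(she, pizza).toFixed(2)); // '0.66'
```

A score near 1 means the vectors point in nearly the same direction, which is how a model can discover that apple and banana are related even though the words share no letters.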

For more information on how embeddings are used to make sense of words and sentences, check out [What are Word and Sentence Embeddings?](https://txt.cohere.com/sentence-word-embeddings/) by Luis Serrano. If you&apos;d like to generate embeddings from text you can use [OpenAI&apos;s embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) (many others also exist).

Now that you&apos;ve been introduced to tokens and embeddings, let&apos;s take a closer look at the role of transformers.

## **Transformers: Assembling Linguistic LEGOs**

Imagine sharing a memory with a group of friends: &quot;She ate the pizza.&quot; But you&apos;re among Spanish speakers! Fear not, the transformer understands each token&apos;s nuance. At their core, transformers are composed of an encoder and a decoder. The encoder reads in an input sequence, like a sentence, and produces an encoded representation of its contents. This compact encoding captures the contextual meaning of the full input. The decoder then takes this encoded representation and generates an output sequence from it.

1. **The Encoder**: Interprets the essence of each English token, keeping the core message intact.

2. **The Decoder**: Building upon this input, it selects Spanish tokens: &quot;Ella comió la pizza.&quot; Something called &quot;attention&quot; guides each token into place (more on this in a moment).
    

[![](/images/blog/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/image-19.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-19.png)

Say we want to automatically translate the sentence &quot;She ate the pizza&quot; from English into Spanish. Here are the general transformer steps:

1. An encoder module reads the input sentence tokens and generates an encoded representation of its meaning. This is like a summary of the overall context.  
    

2. Attention layers connect the encoder and decoder, allowing the decoder to focus on relevant words in the original sentence as it generates the output. The attention mechanism gives transformers their ability to draw global dependencies between input and output.
    

3. A decoder module takes this encoded context and outputs the Spanish translation word-by-word: &quot;Ella comió la pizza&quot;.

## Attention: The Transformer&apos;s Guide

Picture the encoder and decoder modules as being like towering, multilayer LEGO creations. Each layer incrementally processes the input tokens in a more complex way. Both the encoder and decoder are composed of smaller building blocks stacked on top of each other. Each block applies layers of multi-headed self-attention and feedforward neural networks to the data.

[![](/images/blog/the-abcs-of-ai-transformers-tokens-and-embeddings-a-lego-story/image-14.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-14.png)

Here&apos;s what&apos;s inside these LEGO layers:

- **Multi-headed Attention**: These are our diligent LEGO designers. Just as designers consider how each LEGO piece relates to the others, multi-headed attention assesses how each token interacts with every other token in the sentence. For instance, it might recognize the tight relationship between &quot;She&quot; and &quot;ate&quot;, or how &quot;ate&quot; connects with &quot;pizza&quot;, capturing the broader context of the sentence.  
    

- **Feed-forward Networks**: Picture these as the meticulous LEGO builders. After the designers (multi-headed attention) provide the blueprint, these builders refine it. They work on each token in parallel, ensuring they fit seamlessly within the overall structure. This could mean adjusting a token&apos;s representation to better mesh with its neighboring tokens, thus refining the entire sentence&apos;s representation.

Collectively, multi-headed attention and feed-forward networks collaborate at each layer, progressively enhancing token representations. It&apos;s similar to stacking LEGO layers to build a richer, more detailed structure. Through this process, transformers can grasp intricate linguistic patterns and relationships, allowing for accurate translations and other language completion tasks.
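To make the attention idea more concrete, here&apos;s a deliberately minimal, single-head sketch. It assumes no learned projection matrices (queries, keys, and values are all just the raw token embeddings), which real transformers do not do - this only illustrates the &quot;score every token against every other token, then blend&quot; pattern:

```typescript
// Softmax turns raw scores into weights that sum to 1.
function softmax(xs: number[]): number[] {
  const max = Math.max(...xs);
  const exps = xs.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Minimal single-head scaled dot-product attention sketch.
// Real transformers learn query/key/value projections and run
// many heads in parallel; here Q = K = V = the token embeddings.
function attention(embeddings: number[][]): number[][] {
  const d = embeddings[0].length;
  return embeddings.map(query => {
    // Score this token against every token (scaled dot product).
    const scores = embeddings.map(key =>
      key.reduce((s, k, i) => s + k * query[i], 0) / Math.sqrt(d)
    );
    const weights = softmax(scores);
    // Output: weighted blend of all token embeddings.
    return embeddings[0].map((_, i) =>
      embeddings.reduce((s, value, j) => s + weights[j] * value[i], 0)
    );
  });
}

// Toy embeddings for 'She ate the pizza'
const contextual = attention([
  [0.1, 0.3, 0.5], // She
  [0.7, 0.2, 0.1], // ate
  [0.3, 0.1, 0.9], // the
  [0.2, 0.6, 0.1], // pizza
]);
console.log(contextual.length); // 4: one context-aware vector per token
```

Each output vector now mixes in information from the other tokens, weighted by relevance - that&apos;s how &quot;ate&quot; can carry a hint of &quot;pizza&quot; by the time the decoder sees it.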

## Summary

That&apos;s a wrap for this post! Is there more to the AI transformer story? Absolutely! However, the goal of this post is to introduce you to the main concepts and help you understand how they fit together. If you&apos;d like additional details I&apos;d recommend making time to read through the articles in the **Additional Resources** section below.  
  
Let&apos;s sum up what was covered:

- Think of tokens as atoms that combine to form sentences or linguistic &quot;molecules.&quot; Analogously, if sentences are LEGO structures, tokens are the individual LEGO bricks. For instance, the simple sentence &quot;She ate the pizza&quot; gets broken down into tokens, each represented by numeric embeddings.  
    

- Embeddings are dense numeric vectors that capture semantic meaning for each token. They bring discrete language into a continuous vector space that transformers can analyze. The embeddings serve as the mathematical language and data representation that transformers operate on.  
    

- Transformers are AI models that excel in tasks like translation and text generation by treating language as building blocks called tokens.  
    

- At the heart of a transformer are an encoder and a decoder. The encoder absorbs a sentence, grasping the essence of its tokens. The decoder then creates an output, like a translated sentence, building on the context captured by the encoder. Throughout this process, mechanisms like multi-headed attention and feed-forward networks inspect and refine tokens, much like LEGO designers and builders perfecting a structure.

In sum, transformers master language by breaking it down into manageable chunks (tokens), giving each chunk a numeric identity (embeddings), and then working with these chunks to produce meaningful outputs.

Found this information useful? Please share it with others and follow me to get updates:

- Twitter – [https://twitter.com/danwahlin](https://twitter.com/danwahlin)

- LinkedIn – [https://www.linkedin.com/in/danwahlin](https://www.linkedin.com/in/danwahlin)

**Additional Resources**

- [The Illustrated Transformer](http://jalammar.github.io/illustrated-transformer/)

- [How GPT models work: accessible to everyone](https://bea.stollnitz.com/blog/how-gpt-works)

- [The Transformer architecture of GPT models](https://bea.stollnitz.com/blog/gpt-transformer/)

- [What are Word and Sentence Embeddings?](https://txt.cohere.com/sentence-word-embeddings/)[](https://pluralsight.pxf.io/c/1191765/1161352/7490)</content:encoded></item><item><title>Getting Started with Azure OpenAI and GPT Models</title><link>https://blog.codewithdan.com/getting-started-with-azure-openai/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-with-azure-openai/</guid><pubDate>Wed, 02 Aug 2023 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/getting-started-with-azure-openai/image-13-1024x614.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-13.png)

## Introduction

In this post, we&apos;ll explore how you can get started using Azure OpenAI. We&apos;ll take a look at setting up a resource using the Azure Portal, learn how to deploy a model, and experiment with the model in Azure OpenAI Studio.

If you&apos;d like to see a quick overview of everything covered in this post, check out the following &quot;getting started&quot; video.

**Getting Started with Azure OpenAI in 6-ish Minutes**

https://www.youtube.com/watch?v=jQyYeYWD97I

Let&apos;s dive right in!

## Step 1: Create an Azure OpenAI Resource

Head over to the [Azure Portal](https://portal.azure.com) and locate the search bar at the top of the page. Search for **Azure OpenAI**, select that option, and then select **Create**.

Once you&apos;ve navigated to the **Create Azure OpenAI** page you can choose your subscription, preferred resource group, and region, and enter a name for the resource. The pricing tier currently has only one option (that&apos;s subject to change in the future).

[![](/images/blog/getting-started-with-azure-openai/image-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image.png)

After entering the required information, select the **Next** button until you get to the **Review + submit** screen. Then select **Create**.

## Step 2: Explore the Newly Created Azure OpenAI Resource

Once the deployment completes, you can explore the resource. In your Azure OpenAI resource **Overview** page you&apos;ll find an endpoint. This is used along with a key to add Azure OpenAI capabilities to your apps.

[![](/images/blog/getting-started-with-azure-openai/image-2-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-2.png)

Select **Resource Management** --&gt; **Keys and Endpoint** from the left menu and you&apos;ll see the endpoint listed as well as your available keys. Although we won&apos;t be using these in this tutorial, you&apos;ll need them to integrate Azure OpenAI functionality into your custom apps as mentioned earlier.
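As a preview of how that endpoint and key eventually get used in an app (we won&apos;t call the service in this post), here&apos;s a sketch that assembles a chat completions request. The resource name, deployment name, key, and api-version shown are placeholder/assumed values - substitute the ones from your own resource:

```typescript
// Sketch of how the endpoint and key fit into a REST call.
// All values passed in below are placeholders, not real credentials.
function buildChatRequest(
  endpoint: string,
  deployment: string,
  apiKey: string,
  userMessage: string
) {
  return {
    url: `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=2023-05-15`,
    headers: { 'Content-Type': 'application/json', 'api-key': apiKey },
    body: JSON.stringify({ messages: [{ role: 'user', content: userMessage }] }),
  };
}

const req = buildChatRequest(
  'https://my-resource.openai.azure.com', // your endpoint
  'gpt-35-turbo-0613',                    // your deployment name
  'YOUR_KEY',                             // from Keys and Endpoint
  'Hello!'
);
// Send with: fetch(req.url, { method: 'POST', headers: req.headers, body: req.body })
```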

## Step 3: Explore Azure OpenAI Studio

Azure OpenAI offers various models such as GPT-3 and GPT-4 (currently you have to apply for access to GPT-4). How can you access these? Go to **Resource Management** --&gt; **Model deployments** and select **Manage Deployments**. This will open **Azure OpenAI Studio**.

[![](/images/blog/getting-started-with-azure-openai/image-3-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-3.png)

Before diving into the deployments screen, let&apos;s quickly look at a few of the Azure OpenAI Studio features. Azure OpenAI Studio provides several different features such as the **Chat playground** and **Completions playground** to experiment with models you deploy. You can also **Bring your own data** into Azure OpenAI by uploading Word or PowerPoint documents, Markdown files, HTML files, or PDFs. The bring your own data feature combines Azure Cognitive Search with Azure OpenAI models to allow your users to ask questions about your documents using ChatGPT style functionality. In addition to these features, Azure OpenAI Studio also lets you experiment with generating images using the DALL-E playground. Although we won&apos;t be exploring all of these options in this post, take some time to go through them and experiment on your own.

## Step 4: Create a Model Deployment

To create a new Azure OpenAI model deployment you need to select a model. We&apos;ll be using the GPT 3.5 turbo model but you should pick one that&apos;s best for your specific needs (learn more about [available models in the documentation](https://aka.ms/azure-openai-models)). In the **Deployments** screen, select the **Create new deployment** button.

&gt; Note that when you first go from the Azure Portal to Azure OpenAI Studio you may be presented with a dialog to select your subscription and model of choice directly.

[![](/images/blog/getting-started-with-azure-openai/image-6-1024x567.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-6.png)

Select the **gpt-35-turbo** model. In this case we&apos;ll use 0613 for the model version since it supports [OpenAI function calling](https://platform.openai.com/docs/guides/gpt/function-calling). Normally you&apos;ll select the **Auto-update to default** option, especially if you want to ensure that the model works correctly with the different playgrounds (more on this in a moment), but we&apos;re going to live on the wild side to explore a few things.

[![](/images/blog/getting-started-with-azure-openai/image-4-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-4.png)

After setting the model and version, give your deployment a name. Something like **gpt-35-turbo-0613** works fine. With everything in place you&apos;re ready to select the **Create** button.

## Step 5: Experiment with the Completions Playground

With the model in place, it&apos;s time to see it in action. Let&apos;s jump to the **Completions playground** in Azure OpenAI Studio. Select the model you deployed as well as an example prompt from the list. We&apos;ll choose the **Generate an email** prompt in this case and then select **Generate**.

[![](/images/blog/getting-started-with-azure-openai/image-9-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-9.png)

It&apos;s important to note that a newly created model might need a few minutes to be fully deployed, so if you&apos;re trying the playground functionality immediately after creating the model, you may need to wait a few minutes (typically around five) before you start seeing results. Otherwise you may see an error similar to the following:

[![](/images/blog/getting-started-with-azure-openai/image-8-1024x242.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-8.png)

If you see that error, go grab a coffee, and try again in a few minutes.

It&apos;s also important to mention that depending on the model version you use it&apos;s possible that you may see another error in the **Completions playground** since not all model versions work there. If you get an error similar to the following, go back to the deployments screen you were in earlier and create another model as shown earlier. But, this time select _**Auto-update to default**_ for the version.

[![](/images/blog/getting-started-with-azure-openai/image-10.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-10.png)

Wait for the new model to deploy and you should now be able to use the **Completions playground** and see the model in action with the **Generate an email** prompt.

[![](/images/blog/getting-started-with-azure-openai/image-11-1024x577.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/08/image-11.png)

## Conclusion

Congratulations! You&apos;ve just set up and tested a model using Azure OpenAI. Despite the short wait for freshly deployed models and the fact that some model versions aren&apos;t yet available in every playground, Azure OpenAI is a powerful resource to have at your disposal.

Don&apos;t stop there, though. Explore the **Chat playground**, create images in the **DALL-E playground**, and do more to get additional experience working with Azure OpenAI.

If you&apos;re wondering how you integrate Azure OpenAI with your applications using the endpoint and key mentioned earlier, stay tuned for a future video and post on that topic!

Found this information useful? Please share it with others and follow me to get updates:

- Twitter - [https://twitter.com/danwahlin](https://twitter.com/danwahlin)

- LinkedIn - [https://www.linkedin.com/in/danwahlin](https://www.linkedin.com/in/danwahlin)</content:encoded></item><item><title>TypeChat: Define Schemas for Your OpenAI Prompts</title><link>https://blog.codewithdan.com/typechat-define-schemas-for-your-openai-prompts/</link><guid isPermaLink="true">https://blog.codewithdan.com/typechat-define-schemas-for-your-openai-prompts/</guid><pubDate>Sun, 30 Jul 2023 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/typechat-define-schemas-for-your-openai-prompts/2023-07-31_00-28-24.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/07/2023-07-31_00-28-24.png)

Have you ever spent hours carefully crafting the perfect OpenAI prompt, trying to coax your AI model into generating just the right response? As an AI developer, I know that struggle all too well, especially when it comes to returning structured JSON. What if there was an easier way to retrieve JSON responses from OpenAI and Azure OpenAI models?

Meet [TypeChat](https://aka.ms/typechat) - an open source project from Microsoft that aims to make working with AI more efficient across different types of AI models by using TypeScript types (called &quot;schemas&quot;). Created by [Anders Hejlsberg](https://twitter.com/ahejlsberg) and team, TypeChat allows you to define structured schemas that are used along with your prompts. This schema-based approach brings more control and accuracy to your AI interactions and allows well-defined JSON structures to be returned.

In this post, I&apos;ll walk you through how TypeChat works and show how you can use it to enhance your AI projects. If you&apos;d like to see a quick overview of TypeChat in action, check out the following &quot;getting started&quot; video.  
  
**Video: Getting Started with TypeChat, Schemas and OpenAI**

https://youtu.be/t4YStIA88Yo

Let&apos;s get started by comparing TypeChat with the OpenAI function calling feature.

## OpenAI Function Calling

OpenAI models such as **gpt-3.5-turbo-0613** support [function calling](https://platform.openai.com/docs/guides/gpt/function-calling) which can be used to retrieve structured JSON data in model responses. Here&apos;s an example of a curl call that demonstrates what an OpenAI function call looks like:

```bash
curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H &apos;Content-Type: application/json&apos; -d &apos;{
  &quot;model&quot;: &quot;gpt-3.5-turbo-0613&quot;,
  &quot;messages&quot;: [
    {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Schedule a meeting with Jane Doe on August 1, 2023 at 1 pm pacific time.&quot;}
  ],
  &quot;functions&quot;: [
    {
      &quot;name&quot;: &quot;create_meeting&quot;,
      &quot;description&quot;: &quot;Create a meeting&quot;,
      &quot;parameters&quot;: {
        &quot;type&quot;: &quot;object&quot;,
        &quot;properties&quot;: {
          &quot;attendee&quot;: {
            &quot;type&quot;: &quot;string&quot;,
            &quot;description&quot;: &quot;Attendee to invite&quot;
          },
          &quot;date&quot;: {
            &quot;type&quot;: &quot;string&quot;,
            &quot;description&quot;: &quot;Meeting date&quot;
          },
          &quot;time&quot;: {
            &quot;type&quot;: &quot;string&quot;,
            &quot;description&quot;: &quot;Meeting time&quot;
          }
        }
      }
    }
  ]
}&apos;
```

Notice that a specific model is being used (gpt-3.5-turbo-0613 in this example) and a function named **create\_meeting** is defined as well as the parameters that the function expects to receive. This type of OpenAI function call returns a response similar to the following:

```json
{
  &quot;id&quot;: &quot;chatcmpl-7iBcirMinlxNhSaWQMjNIqZnj1N3Y&quot;,
  &quot;object&quot;: &quot;chat.completion&quot;,
  &quot;created&quot;: 1690765468,
  &quot;model&quot;: &quot;gpt-3.5-turbo-0613&quot;,
  &quot;choices&quot;: [
    {
      &quot;index&quot;: 0,
      &quot;message&quot;: {
        &quot;role&quot;: &quot;assistant&quot;,
        &quot;content&quot;: null,
        &quot;function_call&quot;: {
          &quot;name&quot;: &quot;create_meeting&quot;,
          &quot;arguments&quot;: &quot;{\n  \&quot;attendee\&quot;: \&quot;Jane Doe\&quot;,\n  \&quot;date\&quot;: \&quot;August 1, 2023\&quot;,\n  \&quot;time\&quot;: \&quot;1 pm\&quot;\n}&quot;
        }
      },
      &quot;finish_reason&quot;: &quot;function_call&quot;
    }
  ],
  &quot;usage&quot;: {
    &quot;prompt_tokens&quot;: 89,
    &quot;completion_tokens&quot;: 39,
    &quot;total_tokens&quot;: 128
  }
}
```

Note that the **arguments** property in the response contains JSON data that matches the parameters defined in the function call (also note that you&apos;d have to deal with timezones on your own - fun, fun).

```json
{ 
  &quot;attendee&quot;: &quot;Jane Doe&quot;,
  &quot;date&quot;: &quot;August 1, 2023&quot;,
  &quot;time&quot;: &quot;1 pm&quot;
}
```
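Pulling that structured data out of the response takes one extra step, because **arguments** arrives as a JSON string rather than a parsed object. A small sketch of that extraction (the Meeting interface name is just illustrative):

```typescript
// The function_call arguments arrive as a JSON *string*, so they
// need their own parse step - and the model could still emit
// malformed JSON, so parsing defensively is a good idea.
interface Meeting { attendee: string; date: string; time: string; }

function extractMeeting(response: any): Meeting | null {
  const call = response?.choices?.[0]?.message?.function_call;
  if (!call || call.name !== 'create_meeting') return null;
  try {
    return JSON.parse(call.arguments) as Meeting;
  } catch {
    return null; // the model returned invalid JSON
  }
}
```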

By using the function calling feature you can get back the exact JSON data you desire. However, this only works if you&apos;re using a model that supports function calling. What if you&apos;re not using a model that supports this feature? Or, what if you&apos;d prefer to use TypeScript &quot;schemas&quot; to define the specific JSON data you&apos;d like returned? That&apos;s where TypeChat comes into play!

## Schemas: Moving Beyond Textual Prompts

At its core, TypeChat simplifies and streamlines AI development. You no longer have to rack your brain composing the perfect text prompt. By defining schemas upfront using TypeScript, you can focus on building the overall logic and deliver accurate results faster. TypeChat handles dynamically embedding the schema as it interacts with the AI model. Whether you&apos;re creating tutorials, voice assistants, or any other AI application, TypeChat can help enhance precision and productivity.

With traditional prompt engineering, you provide text prompts and hope the model interprets them correctly. But as we all know, language can be ambiguous even if you&apos;re crystal clear about what you&apos;re expecting. TypeChat provides a more precise method by letting you define schemas that specify the expected input and output types. For example, you can define a schema for a restaurant order containing fields like food items, sides, quantities, and more.

## Getting Started with TypeChat

Interested in trying TypeChat? Get started by checking out the [TypeChat GitHub repository](https://github.com/microsoft/TypeChat). You can clone the repo or run it directly using [GitHub Codespaces](https://docs.github.com/codespaces/overview) (I show how to use Codespaces in the [video overview](https://www.youtube.com/watch?v=t4YStIA88Yo) mentioned earlier). The **examples** folder contains several useful code samples to help you understand TypeChat&apos;s capabilities, examine schemas, and learn more about using the TypeChat API.

TypeChat works with both OpenAI and Azure OpenAI, so pick your preferred platform and set up the credentials in the **.env** file as mentioned in the **[examples/readme.md](https://github.com/microsoft/TypeChat/blob/main/examples/README.md#step-3-configure-environment-variables)** file.
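
As a point of reference, a minimal **.env** file might look like the following. The variable names below reflect the examples README at the time of writing; check that file for the current names and values:

```
# OpenAI
OPENAI_MODEL=gpt-3.5-turbo
OPENAI_API_KEY=<your key>

# Or, for Azure OpenAI
AZURE_OPENAI_ENDPOINT=<your endpoint URL>
AZURE_OPENAI_API_KEY=<your key>
```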

## Schema-Based Engineering in Action

Let&apos;s look at how schema engineering improves a real-world use case like taking food orders. Without TypeChat, you&apos;d have to provide text prompts such as the following. Note that this is an overly simplistic prompt.

&gt; Take this food order from the customer: a large cheese pizza and a side of breadsticks. Convert the order into JSON data that looks like the following:  
&gt;   
&gt; { &quot;items&quot;: \[ { &quot;name&quot;: &quot;&quot;, &quot;size&quot;: &quot;&quot;, &quot;toppings&quot;: \[&quot;&quot;\] }\], &quot;sides&quot;: \[&quot;&quot;\] }  
&gt;   
&gt; Only return JSON data and no other text content.

While OpenAI models are generally good at following rules defined in a prompt, you can certainly get unexpected results at times. Even though the previous prompt asks for JSON, it&apos;s possible to get back text as well as the JSON data you&apos;re expecting.  
  
**Note:** This challenge is covered in an Azure OpenAI tutorial available at [https://aka.ms/openai-acs-msgraph](https://aka.ms/openai-acs-msgraph) if you&apos;re interested in learning more about working with OpenAI Large Language Models (LLMs) and prompts.  
  
With TypeChat, a schema can be used to define the expected model output structure upfront:

```typescript
// An order from a restaurant that serves pizza, beer, and salad
export type Order = {
    items: (OrderItem | UnknownText)[];
};

export type OrderItem = Pizza | Beer | Salad | NamedPizza;

export type Pizza = {
    itemType: &apos;pizza&apos;;
    // default: large
    size?: &apos;small&apos; | &apos;medium&apos; | &apos;large&apos; | &apos;extra large&apos;;
    // toppings requested (examples: pepperoni, arugula)
    addedToppings?: string[];
    // toppings requested to be removed (examples: fresh garlic, anchovies)
    removedToppings?: string[];
    // default: 1
    quantity?: number;
    // used if the requester references a pizza by name
    name?: &quot;Hawaiian&quot; | &quot;Yeti&quot; | &quot;Pig In a Forest&quot; | &quot;Cherry Bomb&quot;;
};

export interface NamedPizza extends Pizza {
}

export type Beer = {
    itemType: &apos;beer&apos;;
    // examples: Mack and Jacks, Sierra Nevada Pale Ale, Miller Lite
    kind: string;
    // default: 1
    quantity?: number;
};

export const saladSize = [&apos;half&apos;, &apos;whole&apos;];

export const saladStyle = [&apos;Garden&apos;, &apos;Greek&apos;];

export type Salad = {
    itemType: &apos;salad&apos;;
    // default: half
    portion?: string;
    // default: Garden
    style?: string;
    // ingredients requested (examples: parmesan, croutons)
    addedIngredients?: string[];
    // ingredients requested to be removed (example: red onions)
    removedIngredients?: string[];
    // default: 1
    quantity?: number;
};
```
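
To make the target concrete, here&apos;s the general shape of JSON a model guided by this schema might return for an order like &quot;a large pepperoni pizza and two Mack and Jacks&quot;. The trimmed-down types and sample values below are illustrative, not output captured from a model:

```typescript
// Trimmed-down copies of the schema types above, just for illustration
type Pizza = { itemType: 'pizza'; size?: string; addedToppings?: string[]; quantity?: number };
type Beer = { itemType: 'beer'; kind: string; quantity?: number };
type UnknownText = { itemType: 'unknown'; text: string };
type Order = { items: (Pizza | Beer | UnknownText)[] };

// The kind of structured result the schema steers the model toward
const order: Order = {
  items: [
    { itemType: 'pizza', size: 'large', addedToppings: ['pepperoni'], quantity: 1 },
    { itemType: 'beer', kind: 'Mack and Jacks', quantity: 2 },
  ],
};

console.log(JSON.stringify(order, null, 2));
```

Because the schema is plain TypeScript, the same types that guide the model also type-check your application code that consumes the result.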

Once the schema is created, you feed it into the TypeChat API. Here&apos;s a quick summary of the key API functions used to combine a schema with a user prompt and send it to an LLM. The following code is from the **restaurants** example in the [TypeChat repo](https://github.com/microsoft/TypeChat).

```typescript
// Create an OpenAI or Azure OpenAI model object depending on 
// the properties defined in the .env file
const model = createLanguageModel(process.env);

// Read in the schema to use
const viewSchema = fs.readFileSync(
  path.join(__dirname, &quot;foodOrderViewSchema.ts&quot;),
  &quot;utf8&quot;
);

// Create a TypeChat JSON translator
const translator = createJsonTranslator&lt;Order&gt;(model, viewSchema, &quot;Order&quot;);

// Call the JSON translator&apos;s translate() function and pass
// the user request (restaurant order in this example)
const response = await translator.translate(request);
if (!response.success) {
  console.log(response.message);
  return;
}
const order = response.data;
if (order.items.some((item) =&gt; item.itemType === &quot;unknown&quot;)) {
  console.log(&quot;I didn&apos;t understand the following:&quot;);
  for (const item of order.items) {
    if (item.itemType === &quot;unknown&quot;) console.log(item.text);
  }
}
printOrder(order);
```

The [translate() function](https://github.com/microsoft/TypeChat/blob/main/src/typechat.ts#L92) calls a **createRequestPrompt()** function internally to combine the user prompt with the schema:

```typescript
function createRequestPrompt(request: string) {
    return `You are a service that translates user requests into JSON objects of type &quot;${validator.typeName}&quot; according to the following TypeScript definitions:\n` +
        `\`\`\`\n${validator.schema}\`\`\`\n` +
        `The following is a user request:\n` +
        `&quot;&quot;&quot;\n${request}\n&quot;&quot;&quot;\n` +
        `The following is the user request translated into a JSON object with 2 spaces of indentation and no properties with the value undefined:\n`;
}
```

## Handling Ambiguous Input

TypeChat can help a model return the expected JSON data. No more guessing how the AI will interpret your prompts. However, challenges still arise from time to time.

One of TypeChat&apos;s superpowers is gracefully handling unknown or ambiguous terms in natural language. For example, let&apos;s say a customer orders a &quot;Boar in a Blanket&quot; pizza that&apos;s not on the menu. TypeChat works with the AI model to extract the relevant details from an unfamiliar prompt and provide an accurate response based on the defined schema. Unknown items are defined in the schema as shown next:

```typescript
// Use this type for order items that match nothing else
export interface UnknownText {
    itemType: &apos;unknown&apos;,
    text: string; // The text that wasn&apos;t understood
}
```

This ability to process fuzzy natural language makes TypeChat ideal for voice assistants, chatbots, and other scenarios where interpreting user intent isn&apos;t always an exact science.

## Validating Output

In addition to embedding the schema in the prompt sent to the model, TypeChat also validates the model&apos;s responses against your defined schema. This gives you an extra layer of control and ensures the output matches your specifications.

For example, you can catch missing fields or data type mismatches during validation. This helps enhance the reliability of your AI system. Once a response is received from a model, the **translate()** function calls the [following code to validate the response](https://github.com/microsoft/TypeChat/blob/main/src/typechat.ts#L107) against the schema:

```typescript
const validation = validator.validate(jsonText);
if (validation.success) {
    return validation;
}
if (!attemptRepair) {
    return error(`JSON validation failed: ${validation.message}\n${jsonText}`);
}
```

## Summary

Although using textual prompts works well in many scenarios, when you need to receive structured JSON data back from a model, TypeChat provides an efficient and clean way to do that using schemas. It works across different types of models so you don&apos;t have to worry about a model supporting a specific feature (other than AI completion capabilities). Check out the TypeChat docs to learn more about how to get started using it and watch the [Getting Started with TypeChat, Schemas and OpenAI](https://www.youtube.com/watch?v=t4YStIA88Yo) video to see it in action.

As mentioned earlier, [OpenAI function calling](https://platform.openai.com/docs/guides/gpt/function-calling) can also be used to return specific JSON data although it requires that you use a model that supports that feature. It&apos;s worth exploring so that you understand all of the available options.

[TypeChat is an open source project](https://github.com/microsoft/TypeChat) actively maintained by Microsoft. Join the growing community on GitHub to share ideas and shape the future of the technology.

Found this information useful? Please share it with others and follow me to get updates:

- Twitter - [https://twitter.com/danwahlin](https://twitter.com/danwahlin)

- LinkedIn - [https://www.linkedin.com/in/danwahlin](https://www.linkedin.com/in/danwahlin)</content:encoded></item><item><title>Integrate OpenAI, Communication, and Organizational Data Features into Your Apps</title><link>https://blog.codewithdan.com/integrate-openai-communication-and-organizational-data-features-into-your-apps/</link><guid isPermaLink="true">https://blog.codewithdan.com/integrate-openai-communication-and-organizational-data-features-into-your-apps/</guid><pubDate>Tue, 11 Jul 2023 00:00:00 GMT</pubDate><content:encoded>Artificial intelligence, communication, and organizational data are three pillars that can help take your custom Line of Business (LOB) applications to the next level. Each pillar brings unique capabilities to the table and enhances the functionality, usability, and productivity of applications and users. In this post you&apos;ll learn more about these three pillars and how they can be integrated into apps using Microsoft Cloud services such as Azure OpenAI, Azure Communication Services (ACS), and Microsoft Graph. After you learn about the pillars, you can get hands-on experience provisioning cloud services and working with code in a hands-on tutorial available at [https://aka.ms/openai-acs-msgraph](https://aka.ms/openai-acs-msgraph).   
  
To get started, let&apos;s look at a high-level overview of the application.

It&apos;s composed of the following parts:  

- A front-end application that handles rendering the UI.

- A back-end API that provides data and other functionality to the front-end.

- A PostgreSQL database that stores customers, orders, and reviews.

- Microsoft Cloud services such as Azure OpenAI, Azure Communication Services, and Microsoft Graph.

The overall goal of the application is to enhance user productivity by simplifying processes, enhancing communication, and bringing organizational data directly into the application to avoid user context shifts.

If you&apos;d like to see the app in action and understand how AI, Communication, and Organizational Data are used, check out the following talk I gave at the ng-conf 2023 conference:

**Thinking Outside the Box: Taking Your LOB Apps to the Next Level**

https://www.youtube.com/watch?v=TZnMTICby5E

Let&apos;s break down each of the three pillars used in the application.  

**Integrating AI**  

In the fast-moving world of digital transformation, optimizing applications with leading-edge technologies is essential for any business looking to stay competitive. Among these technologies, Artificial Intelligence (AI), efficient communication tools, and seamless integration of organizational data are key factors that redefine user interaction, boost productivity, and simplify processes.  

[Azure OpenAI](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/?WT.mc_id=m365-94501-dwahlin) provides a cutting-edge AI service that can bring powerful features to your applications while adding [privacy and security guarantees](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy). For example, it can convert natural language queries into SQL, making complex databases accessible to non-technical users without the need for intricate SQL knowledge. This not only bridges the knowledge gap but also accelerates decision-making processes. However, it&apos;s essential to apply this type of feature mindfully, considering data privacy and security along the way. The adage, &quot;just because you can doesn&apos;t mean you should&quot; applies to this scenario as well as several others.
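
A natural language to SQL feature typically grounds the model with the database schema and some guardrails before passing along the user&apos;s question. The table definitions, wording, and function below are illustrative assumptions rather than code from the tutorial:

```typescript
// Hypothetical schema for the app's PostgreSQL database
const tableSchema = `
CREATE TABLE customers (id INT, company TEXT, city TEXT);
CREATE TABLE orders (id INT, customer_id INT, total NUMERIC);
`;

// Compose the prompt that would be sent to the Azure OpenAI completion endpoint
function buildSqlPrompt(question: string): string {
  return [
    'You are a service that converts natural language into PostgreSQL SELECT queries.',
    'Use only the following tables:',
    tableSchema,
    'Only return SQL and no other text. Never generate INSERT, UPDATE, or DELETE statements.',
    `Question: ${question}`,
  ].join('\n');
}

const prompt = buildSqlPrompt('Which customers are located in Phoenix?');
console.log(prompt);
```

Restricting the model to known tables and read-only statements is one example of applying this feature mindfully, as discussed above.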

  
Azure OpenAI also revolutionizes communication workflows by generating contextually appropriate email or SMS messages based on user-defined rules. This fast-tracks the message creation process and maintains consistency across communications, greatly enhancing productivity.

In addition to natural language to SQL and AI completions features, Azure OpenAI can also be [customized to your unique business data](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/use-your-data?WT.mc_id=m365-94501-dwahlin), enabling contextually accurate responses from extensive sources such as product manuals or FAQs. For example, if a company hosts a large collection of product manuals, Azure OpenAI can help users navigate the data by simply asking questions. Instead of scrolling through a large product manual, a customer service agent can ask a question and have Azure OpenAI return the answers they need.  
  
**Integrating Communication**  

When it comes to communication features, [Azure Communication Services (ACS)](https://learn.microsoft.com/azure/communication-services/?WT.mc_id=m365-94501-dwahlin) adds an extra layer of real-time communication capabilities, making it possible to add phone calls, live chat, audio/video calls, and email and SMS messaging into your applications. For example, you might find that users are constantly making phone calls or sending messages as they interact with an application. Why not allow them to do that directly in the app so that they stay focused and avoid context switching? With ACS you can integrate key communication features that users need to collaborate with employees and customers. This can be done using SDKs and components that can help simplify the development process.

  
**Integrating Organizational Data**  
  
In addition to AI and communication features, you can also integrate organizational data stored within companies directly into custom applications using [Microsoft Graph](https://learn.microsoft.com/en-us/graph/overview?WT.mc_id=m365-94501-dwahlin) and Azure Active Directory. This reduces the need for users to switch to their email client, Teams, OneDrive for Business, or other tools and applications to find email messages, chats, files, calendar events, and other pertinent data. This seamless integration of organizational data helps users make informed decisions faster while improving productivity and the user experience along the way.  

**Hands-On Tutorial**  
  
To see these three pillars in action you can explore a new [hands-on tutorial available on Microsoft Learn](https://aka.ms/openai-acs-msgraph). It covers the technologies discussed in this post and provides a [GitHub repository](https://github.com/microsoft/MicrosoftCloud) that you can reference. The tutorial walks you through the process of setting up the required Microsoft Cloud services and discusses the code needed to enable each technology pillar including:  

- **AI**: Enable users to ask questions in natural language and convert their questions to SQL that can be used to query a database, allow users to define rules that automatically generate email and SMS messages, and learn how natural language can be used to retrieve data from your own custom data sources. Azure OpenAI is used for these features.

- **Communication**: Enable in-app phone calling to customers and Email/SMS functionality using Azure Communication Services.

- **Organizational Data**: Pull in related organizational data that users may need (documents, chats, emails, calendar events) as they work with customers to avoid context switching. Providing access to this type of organizational data reduces the need for the user to switch to Outlook, Teams, OneDrive, other custom apps, their phone, etc. since the specific data and functionality they need is provided directly in the app. Microsoft Graph and Microsoft Graph Toolkit are used for this feature.  
    

**Conclusion**  
  
Harnessing the capabilities of AI, enhancing communication, and integrating organizational data are key to elevating Line of Business (LOB) applications. By using Microsoft Cloud services such as Azure OpenAI, Azure Communication Services (ACS), and Microsoft Graph, you can create more functional, user-friendly, and productive applications. For more information, refer to the [hands-on tutorial](https://aka.ms/openai-acs-msgraph) as well as the following documentation and training content.  

Found this information useful? Please share it with others and follow me to get updates:  

- Twitter - [https://twitter.com/danwahlin](https://twitter.com/danwahlin)

- LinkedIn - [https://www.linkedin.com/in/danwahlin](https://www.linkedin.com/in/danwahlin)

  
**Documentation**  

- [Azure OpenAI Documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/?WT.mc_id=m365-94501-dwahlin)

- [Azure OpenAI on your data](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/use-your-data?WT.mc_id=m365-94501-dwahlin)

- [Azure Communication Services Documentation](https://learn.microsoft.com/en-us/azure/communication-services/?WT.mc_id=m365-94501-dwahlin)

- [Microsoft Cloud for Developers Hub](https://aka.ms/microsoft-cloud)

- [Microsoft Graph Documentation](https://learn.microsoft.com/en-us/graph/overview?WT.mc_id=m365-94501-dwahlin)

- [Microsoft Graph Toolkit Documentation](https://learn.microsoft.com/en-us/graph/toolkit/overview?WT.mc_id=m365-94501-dwahlin)

- [Microsoft Teams Developer Documentation](https://learn.microsoft.com/en-us/microsoftteams/platform/?WT.mc_id=m365-94501-dwahlin)

  
**Training Content**  

- [Apply prompt engineering with Azure OpenAI Service](https://learn.microsoft.com/en-us/training/modules/apply-prompt-engineering-azure-openai//?WT.mc_id=m365-94501-dwahlin)

- [Get started with Azure OpenAI Service](https://learn.microsoft.com/en-us/training/modules/get-started-openai/?WT.mc_id=m365-94501-dwahlin)

- [Introduction to Azure Communication Services](https://learn.microsoft.com/en-us/training/modules/intro-azure-communication-services/?WT.mc_id=m365-94501-dwahlin)

- [Microsoft Graph Fundamentals](https://learn.microsoft.com/en-us/training/paths/m365-msgraph-fundamentals/?WT.mc_id=m365-94501-dwahlin)

- [Video Course: Microsoft Graph Fundamentals for Beginners](https://learn.microsoft.com/en-us/shows/beginners-series-to-microsoft-graph/?WT.mc_id=m365-94501-dwahlin)

- [Explore Microsoft Graph scenarios for JavaScript development](https://learn.microsoft.com/en-us/training/paths/m365-msgraph-scenarios/?WT.mc_id=m365-94501-dwahlin)

- [Explore Microsoft Graph scenarios for ASP.NET Core development](https://learn.microsoft.com/en-us/training/paths/m365-msgraph-dotnet-core-scenarios/?WT.mc_id=m365-94501-dwahlin)

- [Get started with Microsoft Graph Toolkit](https://learn.microsoft.com/en-us/training/modules/msgraph-toolkit-intro/?WT.mc_id=m365-94501-dwahlin)

- [Build and deploy apps for Microsoft Teams using Teams Toolkit for Visual Studio Code](https://learn.microsoft.com/en-us/training/paths/m365-teams-toolkit-vsc/?WT.mc_id=m365-94501-dwahlin)</content:encoded></item><item><title>Docker for Developers: Understanding the Core Concepts</title><link>https://blog.codewithdan.com/docker-for-developers-understanding-the-core-concepts/</link><guid isPermaLink="true">https://blog.codewithdan.com/docker-for-developers-understanding-the-core-concepts/</guid><pubDate>Tue, 27 Jun 2023 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/docker-for-developers-understanding-the-core-concepts/Docker-Logo-1024x576.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/06/Docker-Logo.png)

This post is based on a section from my [Docker for Web Developers course](https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) on Pluralsight.

## Introduction

Docker and containers in general continue to receive a lot of attention, and it&apos;s well-deserved. But you may have found yourself wondering, &quot;What exactly is Docker? Can it be useful for developers like me?&quot; When I first encountered Docker at conferences and user group talks, I wondered whether it had a place in my overall workflow and how I&apos;d use it in different environments such as development, staging, and production. But as I dug deeper, I discovered that Docker can significantly impact our development operations.

In this post, I will start by explaining what Docker is and provide clarification on key terms and concepts essential for understanding Docker&apos;s functionality and utilization. Then, I&apos;ll dive into the benefits that Docker offers to developers, along with some of the tools available.

Let&apos;s begin by addressing the fundamental question, &quot;What is Docker?&quot;.

## What Is Docker?

Docker is a lightweight, open, and secure platform for shipping software. That&apos;s the official definition you&apos;ll often come across. However, when I first encountered this general statement, it didn&apos;t immediately resonate with me because there are several technologies that could fit the description of a &quot;lightweight, open, secure platform.&quot; Let&apos;s explore this further.

Docker simplifies the process of building applications, shipping them, and running them in various environments. By environments, I&apos;m referring to development, staging, production, and other on-premises or cloud-based setups you may have.

So, what exactly does Docker include? The primary components are images, containers, and the supporting tools. You may have seen the Docker logo, featuring a whale carrying containers. To understand this analogy better, let&apos;s take a brief look back at the shipping industry&apos;s history.

Back in the old days, there was less standardization for loading and shipping products on ships (you&apos;ve likely seen pictures of old ships loaded with crates and barrels). It was time-intensive and not very productive to get products on and off a ship.

![Schooner, Vintage, Sailing, Sail, Ship, Boat, Sea](/images/blog/docker-for-developers-understanding-the-core-concepts/schooner-487800_960_720.webp)

Today, the major shipping companies have very standardized shipping containers. As a crane positions itself over a ship when it docks, it&apos;s very quick, efficient, and productive to get those containers on and off the ship. If you&apos;re interested, you can read about the [history of shipping containers](https://www.freightos.com/the-history-of-the-shipping-container/) and how [Malcom McLean](https://en.wikipedia.org/wiki/Malcom_McLean) revolutionized the shipping industry.

Docker is very similar. If you think of the old days with ships that had few standards for shipping products around, that&apos;s where development and deployment were for many years. Everyone did it their own way.

Docker provides a consistent way to ship code around to different environments. As a result, it provides several key benefits to developers. As a developer, you can use Windows, Mac, or Linux to leverage Docker in your development workflow and run software on your machine without doing a traditional installation. This is due to Docker&apos;s support for something called &quot;images&quot;.  

## Images and Containers

Docker relies on images and containers:

[![](/images/blog/docker-for-developers-understanding-the-core-concepts/2022-01-11_00-07-11-1024x564.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/01/2022-01-11_00-07-11.png)

An image has the necessary files to run something on an operating system like Ubuntu or Windows. It&apos;ll have your application framework or database, files that support the application, environment variables, security settings, and more. If you&apos;re doing Node.js, ASP.NET Core, PHP, Python, Java, or something else, you&apos;ll have that framework built into the image as well as your application code. You can think of the image as a blueprint that&apos;s used to get a container up and running.

To draw a parallel with shipping, imagine a person creating CAD drawings or blueprints that dictate how the container&apos;s contents will be organized. These blueprints alone are not useful, but they facilitate the creation of container instances and content organization. This process is analogous to creating Docker images.

Specifically, an image is a read-only template with a layered file system. It consists of files specific to the underlying Linux or Windows operating system, framework files, configuration files, and more. All these files are stacked in layers, collectively forming an image.

[![](/images/blog/docker-for-developers-understanding-the-core-concepts/image-1024x332.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/03/image.png)
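
To make the layering idea concrete, here&apos;s a minimal Dockerfile for a hypothetical Node.js app. Each instruction contributes a layer to the resulting image (the file names and versions here are illustrative assumptions, not from the course):

```dockerfile
# Base OS + Node.js runtime layer
FROM node:20-alpine

WORKDIR /app

# Dependency manifest and installed dependencies layers
COPY package*.json ./
RUN npm ci

# Application code layer
COPY . .

EXPOSE 3000
CMD [&quot;node&quot;, &quot;server.js&quot;]
```

Running `docker build -t myapp .` turns the Dockerfile into an image, and `docker run -p 3000:3000 myapp` starts a container from that image.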

Once you have an image, you can build a container from it. Returning to the shipping analogy, each container on a ship is isolated from the others. The contents of one container are unknown to the others. Similarly, when an image is created, you can start, stop, and delete containers based on that image. This technology offers the advantage of quickly and easily managing containers in various environments such as development, staging, and production.

[![](/images/blog/docker-for-developers-understanding-the-core-concepts/image-1-1024x253.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/03/image-1.png)

## Where Does Docker Run?

Where does Docker run then? Docker can run natively on Linux or Windows. Linux containers (such as the nginx server) can run directly on a Linux machine. If you&apos;re running a Linux container on Mac or Windows, you&apos;ll need a virtual machine, or, in the case of Windows, you can leverage the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) (WSL). Windows containers can run directly on Windows machines.

Fortunately, Docker services can easily be run on Mac, Windows, or Linux machines using [Docker Desktop](https://www.docker.com/get-started) or another tool such as [Rancher Desktop](https://rancherdesktop.io/). They both provide container management clients (an &quot;engine&quot; if you will) that can be used to work with images and containers.

[![](/images/blog/docker-for-developers-understanding-the-core-concepts/image-2.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/03/image-2.png)

## Docker Containers versus Virtual Machines

You may be wondering about the differences between Docker containers and virtual machines. Virtual machines always run on top of a host operating system. This means that if you have a host running Linux or Windows, you can run a guest operating system on it using a hypervisor. The following image illustrates this:

[![](/images/blog/docker-for-developers-understanding-the-core-concepts/image-3-1024x572.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/03/image-3.png)

On the left side of the image, there&apos;s App 1, an ASP.NET Core app with its binaries and libraries. You&apos;ll also see App 2 running a different guest operating system and a different application. Let&apos;s assume the guest OS on the left is Windows, and the one on the right is Linux. In this setup, each virtual machine contains a complete copy of the operating system, resulting in significant overhead in terms of storage and memory. Starting and stopping a virtual machine can also be time-consuming, depending on the available server resources.

In contrast, Docker containers also run on top of a host operating system, whether it&apos;s Linux or Windows Server. The host requires a container engine like Docker Engine to integrate the containers with the host OS. In the right part of the previous image, the host operating system represents the ship, capable of carrying multiple containers (e.g., App 1 and App 2). While App 1 and App 2 may be completely different containers, they don&apos;t require duplicating entire guest operating systems as virtual machines do. You can run containers for your database, caching server, application code, framework, and more. Each container has its own CPU utilization, memory, and storage requirements, but you avoid the overhead of running multiple operating systems.

Now that we&apos;ve covered images, containers, and their runtime environments, let&apos;s explore how Docker can benefit us as web developers.

## Docker Benefits for Developers

Docker offers various benefits to web developers, and in this section, we&apos;ll discuss some of the key advantages we can leverage. Whether you work individually or as part of a team, Docker can expedite the setup of development environments. While this benefit may seem minor, it significantly aids web developers. Additionally, Docker can help eliminate application conflicts. If you encounter compatibility issues when trying to upgrade to the latest framework version, isolated containers can provide a solution. Furthermore, Docker enables the seamless transfer of code and its entire environment across different environments, such as development, staging, and production. Ultimately, these advantages contribute to faster software shipping. Let&apos;s walk through a few of the key benefits in more detail.

**Accelerating Developer Onboarding**  
  
When working with teams that include developers, designers, or those with hybrid roles, it&apos;s crucial to have everyone working on the actual application rather than separate prototypes. Typically, a project involves web servers, database servers, caching servers, and more. Setting up these components on individual developer machines, especially for remote team members, can be challenging. It requires careful configuration, ensuring security, and managing version compatibility. Docker simplifies this process by allowing the creation of one or more images that can be transformed into running containers. These containers can then be deployed across different developer and designer machines.

**Eliminating App Conflicts**  
  
Often, an application runs on a specific framework version, and upgrading to the latest version becomes problematic due to compatibility concerns with other applications on production servers. Docker resolves this issue through isolated containers. Each container can house a specific version of the framework, so multiple applications targeting different framework versions can run side by side on the same machine. For instance, App 1, 2, and 3 can each run smoothly in their own container, each targeting a different version of the framework. With Docker, managing versioning and app conflicts becomes significantly easier, even if your framework lacks robust versioning capabilities.

**Consistency Between Environments**  
  
Inconsistencies between development and staging environments can lead to unexpected surprises and additional development work. I recall a project from around the year 2000 when I encountered these types of challenges. The development environment and staging environment were supposed to be identical, but they turned out to be different in subtle ways. As a result, we had to rework parts of the code when transitioning from development to staging. Docker mitigates these surprises by allowing the seamless transfer of images to different environments. By ensuring that an application runs consistently across development, staging, and production, Docker eliminates many potential issues.

**Shipping Software Faster**  
  
By leveraging Docker&apos;s container isolation, consistent development environments, and other benefits discussed earlier, we gain the ability to ship code faster. Ultimately, software development is about productivity, high quality, predictability, and consistency. As we transfer images between development, staging, and production environments and set up the corresponding containers, we can harness Docker&apos;s advantages to expedite the software shipping process.

Referring back to our earlier discussion of the shipping industry&apos;s transformation, shipping companies increased their loading/unloading productivity by introducing standardized containers. Docker does something similar by simplifying the process of shipping code, frameworks, databases, and more across environments (and cloud providers).

## Installing Docker Desktop

Before diving into using Docker functionality, you&apos;ll want to install [Docker Desktop](https://www.docker.com/). Here&apos;s some information about installing and running Docker Desktop on Windows, Linux, and Mac.  
  
**Windows**  
  
To install [Docker Desktop on Windows](https://docs.docker.com/desktop/install/windows-install/), the following system requirements must be met.

- You need to have the WSL 2 backend (or the Hyper-V backend with Windows containers).

- For the WSL 2 backend, you should have WSL version 1.1.3.0 or above, and you must be running Windows 11 64-bit (Home/Pro version 21H2 or higher) or Windows 10 64-bit (Home/Pro 21H2 or higher).

- Enable the WSL 2 feature on Windows. Hardware prerequisites for running WSL 2 include a 64-bit processor with SLAT, 4 GB of system RAM, and BIOS-level hardware virtualization support.

- Install the Linux kernel update package.

- Note that Docker Desktop on Windows is only supported on Windows versions within Microsoft&apos;s servicing timeline. Keep in mind that containers and images created with Docker Desktop are shared across all user accounts on the machine, except when using the Docker Desktop WSL 2 backend.

**Linux**

To install [Docker Desktop for Linux](https://docs.docker.com/desktop/install/linux-install/), the following system prerequisites must be met.

- Docker Desktop for Linux runs a custom docker context called &quot;desktop-linux&quot; and does not have access to images and containers deployed on the Linux Docker Engine before installation.

- Supported platforms include Ubuntu, Debian, and Fedora, with experimental support for Arch-based distributions.

- Docker supports the current LTS release and the most recent version of the mentioned distributions, discontinuing support for the oldest version when new versions are available.

- General system requirements for Docker Desktop on Linux include a 64-bit kernel with CPU support for virtualization, KVM virtualization support, QEMU version 5.2 or newer, systemd init system, Gnome, KDE, or MATE Desktop environment, at least 4 GB of RAM, and enabling ID mapping in user namespaces.

- Running Docker Desktop in nested virtualization scenarios is not supported by Docker, so it&apos;s recommended to run it natively on supported distributions.

**Mac**

To install [Docker Desktop for Mac](https://docs.docker.com/desktop/install/mac-install/), the following system prerequisites must be met.

Mac with Intel chip:

- macOS version 11 (Big Sur), 12 (Monterey), or 13 (Ventura) is required, and it&apos;s recommended to have the latest macOS version.

- Docker supports the current release of macOS and the previous two releases, discontinuing support for the oldest version when new major macOS versions are available.

- At least 4 GB of RAM is required.

- VirtualBox versions prior to 4.3.30 must not be installed, as they are incompatible with Docker Desktop.

Mac with Apple silicon:

- Starting from Docker Desktop 4.3.0, Rosetta 2 is no longer a hard requirement, but it&apos;s recommended for the best experience. Optional command line tools may still require Rosetta 2 on Darwin/AMD64.

- To install Rosetta 2 manually from the command line, run: `softwareupdate --install-rosetta`

## Getting Started with Docker Commands

Once Docker Desktop is installed and the engine is running on your machine, you can open a command window and type the following command:

```bash
docker
```

This will output a list of different commands that can be run:

[![Docker commands](/images/blog/docker-for-developers-understanding-the-core-concepts/image-1024x641.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/06/image.png)

While a complete discussion of available commands is outside the scope of this post, here are a few core commands to know about. It&apos;s important to note that Docker also provides &quot;management commands&quot; (such as **docker container ls** and **docker image ls**) that overlap with the shorter &quot;core commands&quot; (such as **docker ps** and **docker images**). I tend to go with the core commands (old habits die hard), but some people prefer the management commands. You can learn more about the available commands at [https://docs.docker.com/engine/reference/commandline/cli/](https://docs.docker.com/engine/reference/commandline/cli/).

```bash
# Pull the nginx:alpine image from Docker Hub 
# (a place to store images) to your machine
docker pull nginx:alpine

# Build a docker image from a Dockerfile
# (note: image names must be lowercase)
docker build -t my-image:1.0 .

# List docker images on your machine
docker images

# Run a container in the background, mapping port 8080
# on your machine to port 80 in the container.
# Visit http://localhost:8080 to view it
docker run -d -p 8080:80 nginx:alpine

# List all running containers
docker ps

# List all containers
docker ps -a

# Stop a container
docker stop [containerId | containerName]

# Remove a container
docker rm [containerId | containerName]

# Remove an image
docker rmi [imageId]
```

Looking through the above commands, you might wonder how an image is built. Custom images are built by using a special file called a Dockerfile. Think of it as a list of instructions that determines what goes into the image your containers will eventually run from (code, security settings, configuration, framework, and more). Here&apos;s an example of a simple Dockerfile that can be used to build a custom nginx image.

```
FROM        nginx:alpine
LABEL       author=&quot;Your Name&quot;
WORKDIR     /usr/share/nginx/html
COPY        . /usr/share/nginx/html

# Could do &quot;COPY . .&quot; as well since working directory is set

EXPOSE      80
CMD         [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]
```

This Dockerfile uses the nginx:alpine image as its base, adds a label, sets the working directory (a directory inside the container), and then copies the code from the local machine into the image (into the /usr/share/nginx/html directory). Finally, it exposes port 80 in the container and runs the &quot;nginx&quot; command with flags that keep nginx in the foreground, which is required for the container to keep running.
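
One related note: since COPY copies everything in the build context into the image, it is common to add a .dockerignore file next to the Dockerfile to keep unneeded files out. The entries below are typical examples, not part of the original post:

```
# .dockerignore - files and folders excluded from the build context
.git
node_modules
npm-debug.log
Dockerfile
```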

To build this image, you would navigate to the directory where the Dockerfile lives (the file is named Dockerfile with no extension by default), and run the following command:

```bash
docker build -t my-custom-nginx:1.0 .
```

- \-t defines the name and tag of the image, which here is my-custom-nginx (image names must be lowercase)

- 1.0 represents the version of the image (it&apos;s very important to version your images)

- . is the build context: the path whose contents are available to the build and where Docker looks for the Dockerfile by default. In this example it&apos;s the directory where we&apos;re running the &quot;docker build&quot; command.

While there&apos;s a lot more to cover, you&apos;ve now seen some of the core commands, seen a Dockerfile, and learned how it can be used by the **docker build** command to create a new image. Once an image is created, you can push it to a registry such as Docker Hub, Azure Container Registry, Elastic Container Registry, or others.

```bash
# Push the image to Docker Hub (the default registry).
# Note: images pushed to Docker Hub are normally prefixed
# with your username (for example, username/my-custom-nginx:1.0)
docker push my-custom-nginx:1.0
```

Someone else could then run a **docker pull my-custom-nginx:1.0** to pull the image to their machine/server and then use the **docker run** command to start the container. In a production scenario, the image could be pulled into a cloud service capable of running containers such as:

- [Azure Container Instances](https://learn.microsoft.com/azure/container-instances/container-instances-overview)

- [Azure Container Apps](https://learn.microsoft.com/azure/container-apps/overview)

- [Azure App Service](https://learn.microsoft.com/azure/app-service/quickstart-custom-container?tabs=dotnet&amp;pivots=container-linux-vscode)

- [Azure Kubernetes Service](https://learn.microsoft.com/azure/aks/intro-kubernetes)

## Summary

In this post, you&apos;ve learned what Docker is and how it simplifies the process of building, shipping, and running applications across different environments. You learned that Docker runs natively on Linux and Windows but is distinct from virtual machines, offering improved efficiency and speed.

For developers, Docker provides numerous benefits. It enables rapid setup of development environments, ensuring consistency across different machines and operating systems. Docker eliminates app conflicts by utilizing isolated containers, allowing the simultaneous execution of multiple application versions and frameworks. Additionally, Docker facilitates the seamless transfer of code and its environment between various environments, such as development, staging, and production. By leveraging Docker, you can expedite software shipping, benefiting from container isolation, consistent development environments, and improved versioning management. Drawing inspiration from the shipping industry&apos;s adoption of standardized containers, Docker revolutionizes how we ship software components, frameworks, databases, and more across diverse environments.

If you&apos;re interested in diving deeper into Docker and learning how to work with images, containers, networks, volumes, running multiple containers, and more, check out my [Docker for Web Developers course](https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) on Pluralsight.</content:encoded></item><item><title>Solving the Puzzle of Real Time Collaboration using the Fluid Framework</title><link>https://blog.codewithdan.com/solving-the-puzzle-of-real-time-collaboration-using-the-fluid-framework/</link><guid isPermaLink="true">https://blog.codewithdan.com/solving-the-puzzle-of-real-time-collaboration-using-the-fluid-framework/</guid><pubDate>Fri, 27 Jan 2023 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/solving-the-puzzle-of-real-time-collaboration-using-the-fluid-framework/2023-01-27_12-58-07-1024x523.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/01/2023-01-27_12-58-07.png)

I had the opportunity to talk at [ng-conf](https://ng-conf.org/) (one of my favorite conferences of all time!) about how the [Fluid Framework](https://fluidframework.com) can be used to add real-time collaboration into your custom apps. In the talk I cover:

- What you can build using real-time collaboration technologies

- How to get started using the Fluid Framework

- The role of SharedMap and how it&apos;s similar to using the JavaScript Map object

- How the Fluid Framework can be added to a custom Angular app
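
The Map parallel is easy to see in code. As a rough sketch, the snippet below uses the plain JavaScript Map API; based on the Fluid Framework documentation, SharedMap mirrors this get/set/delete surface while also synchronizing values across connected clients and raising change events (the SharedMap behavior described here is paraphrased from the framework docs, not from this talk):

```typescript
// A plain JavaScript Map: the API that SharedMap deliberately mirrors.
const todos = new Map<string, string>();

todos.set("todo-1", "Write the talk");
todos.set("todo-2", "Record the demo");

console.log(todos.get("todo-1")); // "Write the talk"
console.log(todos.has("todo-2")); // true

todos.delete("todo-1");
console.log(todos.size); // 1

// With a Fluid SharedMap, the same set/get/delete calls would also
// propagate to every connected client in real time.
```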

Check out the video from the talk below or visit [https://fluidframework.com](https://fluidframework.com) to access:

- Project [documentation](https://fluidframework.com/docs/)

- The project&apos;s [GitHub repository](https://github.com/microsoft/FluidFramework)

- The [quick start guide](https://fluidframework.com/docs/start/quick-start/) and [samples](https://github.com/microsoft/fluidexamples).

The Angular sample I show in the talk can be found at [https://github.com/DanWahlin/angular-todo-list-fluid](https://github.com/DanWahlin/angular-todo-list-fluid).

https://youtu.be/4UTqBqwN6Mw</content:encoded></item><item><title>Maximize Your Company&apos;s Productivity and Potential with the Power of Real-Time Collaboration &amp; Communication</title><link>https://blog.codewithdan.com/maximize-your-companys-productivity-and-potential-with-the-power-of-real-time-collaboration-communication/</link><guid isPermaLink="true">https://blog.codewithdan.com/maximize-your-companys-productivity-and-potential-with-the-power-of-real-time-collaboration-communication/</guid><pubDate>Tue, 03 Jan 2023 00:00:00 GMT</pubDate><content:encoded>[![Two people collaborating together. This is much more challenging with employees (and customers) working remotely.](/images/blog/maximize-your-companys-productivity-and-potential-with-the-power-of-real-time-collaboration-communication/image-1024x683.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/01/image.png)

If you&apos;re like most people, you use several custom applications every day at work to get things done. You&apos;re entering data, viewing issues, editing and checking in code, making calls to others to verify process rules, and more. When was the last time you sat down and gave some serious thought to how these applications could be taken to the &quot;next level&quot; though? For example, what if team members could collaborate on the same content and communicate with each other in real-time **directly from your app** (using chat, audio and/or video) without having to switch to another app? Could this same overall concept be applied to applications your customers use to enhance their interactions with your company?

One of the key benefits of real-time data collaboration is the ability for multiple team members to work on content together. With traditional methods, multiple versions of data or a document may be created which ultimately leads to confusion, errors, and a general decrease in productivity. Real-time collaboration allows team members to collaborate on the same content simultaneously, reducing the need to resolve data integrity issues. While some of this work can be done using tools such as Word/Excel Online, Google Docs/Sheets, and other solutions, do you have any custom apps where this type of functionality could be embedded directly in the app?

For example, many years ago I worked on an application that allowed a credit card company to adjust interest rates charged to banks. As a user signed into the app and changed a given interest rate in a data grid, it was important for other users in the app to know about the change since it could impact their decision. Back in those days we used concurrency techniques to manage the problem and notified users about changes others were making. With real-time collaboration, users would be able to see everything &quot;live&quot; and make adjustments to their data as required. Data conflicts still have to be resolved, but users can make more informed decisions directly in the app.

In addition to providing the ability to work on content in real-time, today&apos;s &quot;work from home&quot; environment requires that employees communicate and collaborate with team members and customers in real-time. Tools such as Microsoft Teams and Zoom (along with a slew of other viable options) provide this functionality, but what if you could enable it directly in an app to minimize the context shifts users make as they navigate back and forth between applications? One option would be to pull your app into the communication tool itself. Microsoft Teams (and others) provides support for this functionality (see the [Microsoft Teams App Camp](https://microsoft.github.io/app-camp/) site for a robust set of workshop content that shows how to do this). If you don&apos;t want to pull your app into another tool, you can add real-time collaboration directly into the custom app.

For example, assume that a field rep is working remotely to solve a problem and accessing your company app directly on their tablet. When they have questions, they can call back to the company directly from the app to get help or report their progress. This simplifies the process for them and reduces the number of context shifts they have to make (from tablet to phone, or even from one app to another app on the tablet).

Overall, the addition of real-time collaboration and communication functionality to custom applications can greatly benefit teams, companies, and even customers by improving productivity, enhancing communication and collaboration, and increasing flexibility.

So where do you get started? Here are a few solutions I&apos;ve been working with lately that provide the functionality you&apos;d need.

## Real-Time Data Collaboration

If you&apos;re interested in adding real-time data collaboration into your apps, check out the Fluid Framework at [https://fluidframework.com](https://fluidframework.com). It provides a robust option for embedding real-time data collaboration functionality into apps.

[![](/images/blog/maximize-your-companys-productivity-and-potential-with-the-power-of-real-time-collaboration-communication/image-3-1024x560.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/01/image-3.png)

Their [Quick Start app](https://fluidframework.com/docs/start/quick-start/) provides a simple way to get started and learn the basics.

A more robust sample application that uses the Fluid Framework and other Azure/Microsoft 365 technologies to embed real-time data collaboration into a React app can be found here:  
  
[https://github.com/microsoft/brainstorm-fluidframework-m365-azure](https://github.com/microsoft/brainstorm-fluidframework-m365-azure)

[![](/images/blog/maximize-your-companys-productivity-and-potential-with-the-power-of-real-time-collaboration-communication/BrainstormAppDemo.gif)](https://blog.codewithdan.com/wp-content/uploads/2023/01/BrainstormAppDemo.gif)

## Real-Time Chat/Audio/Video/Phone Collaboration

If you&apos;re interested in adding real-time chat and/or audio/video (and more) collaboration directly into your apps, check out [Azure Communication Services](https://learn.microsoft.com/en-us/azure/communication-services/overview) (ACS). It can be used to enable users to call each other in an application, chat, send email and SMS messages, and even make calls directly from an app to phones. You can watch an overview video about ACS here:

https://youtu.be/chMHVHLFcao

A hands-on tutorial that shows how to use ACS to make an audio/video call from an application directly into a Microsoft Teams meeting (including dynamically setting up the meeting using Microsoft Graph) can be found on the [Microsoft Cloud Integration Scenarios](https://microsoft.github.io/MicrosoftCloud/) site:

[https://microsoft.github.io/MicrosoftCloud/tutorials/docs/ACS-to-Teams-Meeting/](https://microsoft.github.io/MicrosoftCloud/tutorials/docs/ACS-to-Teams-Meeting/)

[![](/images/blog/maximize-your-companys-productivity-and-potential-with-the-power-of-real-time-collaboration-communication/image-2-1024x479.webp)](https://blog.codewithdan.com/wp-content/uploads/2023/01/image-2.png)

## Conclusion

While not every application needs real-time collaboration and communication functionality, there are many apps that can benefit from enhancing employee and customer interactions. Explore some of the options above and see if they&apos;re a potential fit to help take your apps to the next level.</content:encoded></item><item><title>Use Power Automate to Retrieve Data from an Azure Function for Reporting</title><link>https://blog.codewithdan.com/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/</link><guid isPermaLink="true">https://blog.codewithdan.com/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/</guid><pubDate>Fri, 21 Oct 2022 00:00:00 GMT</pubDate><content:encoded>In a [previous post](https://blog.codewithdan.com/migrating-a-local-node-script-to-azure-functions-using-vs-code/) I showed how to convert a local Node script into an [Azure Function](https://learn.microsoft.com/en-us/azure/azure-functions/) so that it can be called from anywhere. While that solution provides a great (and cost effective) way to call the script using HTTP, I also needed to automate the calls and add the data into a spreadsheet for reporting purposes.

This post explores how to automate the process using [Power Automate](https://powerautomate.microsoft.com/en-us/). If you haven&apos;t used Power Automate before, it&apos;s part of the Power Platform suite of tools that includes [Power Apps](https://powerapps.microsoft.com/en-us/), [Power Pages](https://powerpages.microsoft.com/), [Power Virtual Agents](https://powervirtualagents.microsoft.com/), and [Power BI](https://powerbi.microsoft.com/en-us/).

## Creating a Power Automate Flow

The final version of the automation flow I created looks like the following:

&lt;figure&gt;

[![Power Automate flow](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-2-516x1024.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-2.png)

&lt;figcaption&gt;

Power Automate Flow

&lt;/figcaption&gt;

&lt;/figure&gt;

It performs the following steps:

- Schedules a task to run every day at a specific time.

- Makes an HTTP call to an Azure Function.

- Parses the JSON data returned from the function call.

- Adds each item from the JSON array to an Excel Online spreadsheet.

To get started, I performed the following steps:

1. Signed in to [https://make.powerautomate.com](https://make.powerautomate.com).

2. Selected my &quot;environment&quot; in the upper-right corner of the screen. You may have multiple environments to choose from if you&apos;re using a work account.

3. Selected the **Create** item in the left menu.

From there I chose **Scheduled cloud flow** from the available templates:

&lt;figure&gt;

[![Scheduled flow template](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-3.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-3.png)

&lt;figcaption&gt;

Scheduled flow template

&lt;/figcaption&gt;

&lt;/figure&gt;

In the dialog that appeared I named my flow, defined how often it would run, and then selected the **Create** button.

&lt;figure&gt;

[![Building a flow based on a schedule](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-4.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-4.png)

&lt;figcaption&gt;

Building a flow based on a schedule

&lt;/figcaption&gt;

&lt;/figure&gt;

## Adding an HTTP Action

After selecting the **Create** button, Power Automate automatically added the first step of the flow for me. Since I told it to run every day at a specific time, it configured that information for me:

&lt;figure&gt;

[![The Recurrence action automatically added by Power Automate](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-5.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-5.png)

&lt;figcaption&gt;

The Recurrence action automatically added by Power Automate

&lt;/figcaption&gt;

&lt;/figure&gt;

The next step in the flow involves calling the Azure Function to retrieve the data needed for reporting. To make that happen, I clicked the **Next step** button and typed &quot;http&quot; into the search box. I then selected the **HTTP action** from the options.

&lt;figure&gt;

[![Selecting the HTTP action](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-6.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-6.png)

&lt;figcaption&gt;

Selecting the HTTP action

&lt;/figcaption&gt;

&lt;/figure&gt;

**NOTE:** The HTTP action is a &quot;premium&quot; feature and requires the proper license. While licensing is beyond the scope of this post, you can find more details in [this document](https://go.microsoft.com/fwlink/?LinkId=2085130&amp;clcid=0x409).

After selecting the HTTP action, you can enter the **method** and **URI** that should be used for the API call. My scenario was quite simple:

- **Method:** GET

- **URI:** https://&lt;my-azure-function-domain&gt;/api/getGitHubRepoStats

The Azure Function for my HTTP call didn&apos;t require authentication (it has publicly available data) so no authentication was needed. Nice and simple. It also didn&apos;t require any specialized headers or queries. In cases where you have to do something more involved, you can learn more about the various options at [https://learn.microsoft.com/en-us/power-automate/desktop-flows/actions-reference/web](https://learn.microsoft.com/en-us/power-automate/desktop-flows/actions-reference/web).

## Adding the Parse JSON Action

After entering my method and URI into the HTTP action, I needed a way to access the JSON data returned from the Azure Function and parse it. To handle that I selected the **New step** button, searched for &quot;json&quot;, and selected the **Parse JSON** action:

&lt;figure&gt;

[![Selecting the Parse JSON action](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-7.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-7.png)

&lt;figcaption&gt;

Selecting the Parse JSON action

&lt;/figcaption&gt;

&lt;/figure&gt;

Once the Parse JSON action dialog appeared I performed the following tasks:

- Selected the **Content** property and picked **Body** from the options since I wanted to parse the body of the message returned from the Azure Function call.

- Selected the **Generate from sample** button and entered the JSON returned from the Azure Function call. This automatically generates a schema and adds it to the **Schema** property of the Parse JSON action. That&apos;s a super nice feature, as you&apos;ll see in the next action that&apos;s added.

&lt;figure&gt;

[![Generating a schema from JSON data in the Parse JSON action](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-8.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-8.png)

&lt;figcaption&gt;

Generating a schema from JSON data in the Parse JSON action

&lt;/figcaption&gt;

&lt;/figure&gt;
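
To make the **Generate from sample** step concrete, here is the general shape of payload it expects. This is a hypothetical example; the actual JSON returned by the Azure Function is not shown in this post:

```json
[
  {
    "repo": "sample-repo",
    "views": 1250,
    "clones": 87,
    "forks": 12
  }
]
```

Pasting a sample like this causes Power Automate to generate a schema with typed properties (repo as a string, views as an integer, and so on), which is what allows later steps to surface each property by name.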

## Adding the Apply to each Action

Up to this point you&apos;ve seen how to call an HTTP API and parse the JSON. The next step is to store the data somewhere, which means iterating through the JSON array that&apos;s returned from the API call. To handle that you can use the **Control actions** provided by Power Automate.

I selected the **Next step** button again, typed &quot;apply&quot; in the search box, and selected the [**Apply to each action**](https://learn.microsoft.com/en-us/power-automate/apply-to-each). How did I know to select that? The first time I used Power Automate a while back I didn&apos;t know, so I had to resort to my favorite search engine. But once you know about it, it&apos;s easy to find and use.

The **Apply to each action** dialog will ask you to select an output from the previous step. You can select **Body** from the Parse JSON options in this case since we want to access the JSON object. That gets us to the data we need and will iterate through each object in the array, but how do we add each object into an Excel spreadsheet or some other type of data store?

## Adding a Connector

I initially wanted to store my data in something called [Dataverse](https://learn.microsoft.com/en-us/power-apps/maker/data-platform/data-platform-intro) and added a connector to that. However, the person consuming the data wanted it in Excel, so I ended up adding a connector to **[Excel Online (Business)](https://learn.microsoft.com/en-us/connectors/excelonlinebusiness/)** as well. To do that, I selected the **Add an action** option from the **Apply to each action** and selected **Excel Online (Business)** from the top options that are shown.

&lt;figure&gt;

[![Selecting the Excel Online connector](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-9.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-9.png)

&lt;figcaption&gt;

Selecting the Excel Online connector

&lt;/figcaption&gt;

&lt;/figure&gt;

Next, I entered the following values:

&lt;figure&gt;

[![Entering information for the Excel Online connector](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-10.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-10.png)

&lt;figcaption&gt;

Entering information for the Excel Online connector

&lt;/figcaption&gt;

&lt;/figure&gt;

This uses **OneDrive for Business**, so I selected a spreadsheet that I created there named **KR 3 FY23.xlsx** as well as the worksheet name (RepoStats in this example). I then picked the values I wanted to store from each object in the JSON array. Because a schema was created in the previous Parse JSON step, you can pick the JSON properties you want to assign to your Excel columns for each row. That&apos;s the beauty of having the Parse JSON action generate a schema, as mentioned earlier.

## Validating and Testing the Flow

All of the steps needed for my particular scenario are now defined and we&apos;re ready to validate the flow and test it. That can be done by selecting the **Flow checker** (to validate) and **Test** (to try it out) options respectively in the upper-right toolbar:

[![](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-11.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-11.png)

The **Flow checker** will display any errors or warnings in the flow so that you can fix them. The **Test** option allows you to manually start the flow to try it out.

&lt;figure&gt;

[![Testing a flow](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-12.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-12.png)

&lt;figcaption&gt;

Testing a flow

&lt;/figcaption&gt;

&lt;/figure&gt;

After testing it, you can go to the test run and if the flow ran successfully you&apos;ll see a message at the top (or an error if it failed):

&lt;figure&gt;

[![View the result of a flow run](/images/blog/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/image-13.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/10/image-13.png)

&lt;figcaption&gt;

View the result of a flow run

&lt;/figcaption&gt;

&lt;/figure&gt;

You can drill down into each action to see what happened and the data that was involved.

Once the flow was ready to go, I let it run so that the spreadsheet is updated every day. Someone else within my organization connects to the spreadsheet and brings it into a [Power BI dashboard](https://powerbi.microsoft.com/en-us/) that we can all view.

## Conclusion

While I could&apos;ve written a custom app to perform these different steps, using Power Automate let me quickly schedule the functionality I needed and convert the JSON data into rows within Excel, all without writing a single line of code. Although this is a fairly straightforward and arguably simple example, it still saved me a ton of time. With more complex flows that time savings is multiplied, and there&apos;s the added benefit of being able to give other people within your organization permission to edit the flow, even if they&apos;re not developers.

If you haven&apos;t tried out Power Automate, create a [free trial](https://powerautomate.microsoft.com/en-us/#home-signup) and give it a spin. There are countless tasks that can be automated with it!

While I could easily convert the script into a [Node/Express](https://www.npmjs.com/package/express) API and publish it to [Azure App Service](https://docs.microsoft.com/azure/app-service/overview), I decided to go with [Azure Functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-develop-vs-code?tabs=nodejs) since, when you boil the script down to the basics, its job is to handle a request and return data. It doesn&apos;t need to be constantly accessed, so a consumption-based model works well.

Here&apos;s the process I went through to convert my local script to an [Azure Function](https://docs.microsoft.com/azure/azure-functions/).

## 1\. Install the Azure Functions Extension for VS Code

&lt;figure&gt;

[![](/images/blog/migrating-a-local-node-script-to-azure-functions-using-vs-code/image-1024x606.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/08/image.png)

&lt;figcaption&gt;

Creating a Function using VS Code and Extensions

&lt;/figcaption&gt;

&lt;/figure&gt;

I wanted to develop the Azure Function locally and knew that the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) could help with that. It allows you to do everything on your local machine and then publish to Azure Functions once you&apos;re ready.

To get started you can:

1. Open [my project](https://github.com/DanWahlin/GitHub-API-Demo) in VS Code.

2. Install the [extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) (I already had it installed, but you&apos;ll want it).

3. Click on **Azure** in the VS Code sidebar.

4. Locate the **Workspace** section and click the **+** icon.

5. Select **Create Function**.

Since I only had a simple Node project at this point, I received the following prompt:

&lt;figure&gt;

[![](/images/blog/migrating-a-local-node-script-to-azure-functions-using-vs-code/image-1.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/08/image-1.png)

&lt;figcaption&gt;

Prompt to create an Azure Functions project.

&lt;/figcaption&gt;

&lt;/figure&gt;

From there I selected the following:

- **Language**: TypeScript

- **Trigger**: HTTP trigger

- **Function Name**: getGitHubRepoStats

- **Authorization level**: Anonymous

I was prompted to overwrite my existing **.gitignore** and **package.json** files. I said &quot;yes&quot; since I only had @octokit/rest in the Node dependencies list. It finished creating the project and displayed the shiny new function in the editor. It added the following into my project (in addition to a few other items):

&lt;figure&gt;

[![](/images/blog/migrating-a-local-node-script-to-azure-functions-using-vs-code/image-2.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/08/image-2.png)

&lt;figcaption&gt;

Files added by the Azure Functions extension.

&lt;/figcaption&gt;

&lt;/figure&gt;

Good progress! Time to get my existing code converted to an Azure Function.

## 2\. Merge the Local Script Code into the Azure Function

My initial script looked like the following:

```typescript
const { Octokit } = require(&quot;@octokit/rest&quot;);
const { v4: uuidv4 } = require(&apos;uuid&apos;);

// Create personal access token (with repo --&gt; public rights) at https://github.com/settings/tokens
let octokit;
let ownersRepos;
let context;
getStats(context);

async function getStats(ctx) {
    context = ctx || { log: console.log }; // Doing this to simulate what it&apos;s like in Azure Functions
    ownersRepos = getRepos();
    context.log(ownersRepos);
    const stats = [];
    for (const repo of ownersRepos) {
        octokit = new Octokit({
            auth: repo.token
        });
        const ownerRepo = {
            owner: repo.owner,
            repo: repo.repo
        }

        const clones = await getClones(ownerRepo);
        const forks = await getTotalForks(ownerRepo);
        const views = await getPageViews(ownerRepo);

        stats.push(getTodayRow(ownerRepo, clones, forks, views));
    }
    context.log(stats);
    return stats;
}

function getRepos() {
    try {
        console.log(context);
        // Need to set env variable GITHUB_REPOS
        // export GITHUB_REPOS=&quot;[ { \&quot;owner\&quot;: \&quot;microsoft\&quot;, \&quot;repo\&quot;: \&quot;MicrosoftCloud\&quot;, \&quot;token\&quot;: \&quot;token_value\&quot; } ]&quot;
        const repos = JSON.parse(process.env[&apos;GITHUB_REPOS&apos;]);
        context.log(&apos;Repos:&apos;, repos);
        return repos;
    }
    catch (e) {
        context.log(e);
        return [];
    }
}

function getTodayRow(ownerRepo, clones, forks, views) {
    const today = new Date();
    const yesterday = new Date(today.getFullYear(), today.getMonth(), today.getDate() - 1)
      .toISOString().split(&apos;T&apos;)[0] + &apos;T00:00:00Z&apos;;

    const todayClonesViewsForks = {
        id: uuidv4(),
        timestamp: yesterday,
        owner: ownerRepo.owner,
        repo: ownerRepo.repo,
        clones: 0,
        forks: forks,
        views: 0
    };
    const todayClones = clones.clones.find(c =&gt; c.timestamp === yesterday);
    const todayViews = views.views.find(v =&gt; v.timestamp === yesterday);
    if (todayClones) {
        todayClonesViewsForks.clones = todayClones.count;
    }
    if (todayViews) {
        todayClonesViewsForks.views = todayViews.count;
    }
    return todayClonesViewsForks;
}

async function getClones(ownerRepo) {
    try {
        // https://docs.github.com/en/rest/metrics/traffic#get-repository-clones
        const { data } = await octokit.rest.repos.getClones(ownerRepo);
        context.log(`${ownerRepo.owner}/${ownerRepo.repo} clones:`, data.count);
        return data;
    }
    catch (e) {
        context.log(`Unable to get clones for ${ownerRepo.owner}/${ownerRepo.repo}. You probably don&apos;t have push access.`);
    }
    return { count: 0, clones: [] }; // empty result so getTodayRow can still call clones.clones.find()
}

async function getTotalForks(ownerRepo) {
    try {
        // https://docs.github.com/en/rest/repos/forks
        const { data } = await octokit.rest.repos.get(ownerRepo);
        const forksCount = (data) ? data.forks_count : 0;
        context.log(`${ownerRepo.owner}/${ownerRepo.repo} forks:`, forksCount);
        return forksCount;
    }
    catch (e) {
        context.log(e);
        context.log(`Unable to get forks for ${ownerRepo.owner}/${ownerRepo.repo}. You probably don&apos;t have push access.`);
    }
    return 0;
}

async function getPageViews(ownerRepo) {
    try {
        // https://docs.github.com/en/rest/metrics/traffic#get-page-views
        const { data } = await octokit.rest.repos.getViews(ownerRepo);
        context.log(`${ownerRepo.owner}/${ownerRepo.repo} visits:`, data.count);
        return data;
    }
    catch (e) {
        context.log(`Unable to get page views for ${ownerRepo.owner}/${ownerRepo.repo}. You probably don&apos;t have push access.`);
        context.log(e);
    }
    return { count: 0, views: [] }; // empty result so getTodayRow can still call views.views.find()
}
```

The next step was to merge my script into the new Azure Function. Since the Azure Functions extension (with my permission) overwrote my **package.json** file, I ran **npm install @octokit/rest** to get the package back into the dependencies list.

At this point I had the following function code displayed in VS Code:

```typescript
import { AzureFunction, Context, HttpRequest } from &quot;@azure/functions&quot;

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise&lt;void&gt; {
    context.log(&apos;HTTP trigger function processed a request.&apos;);
    const name = (req.query.name || (req.body &amp;&amp; req.body.name));
    const responseMessage = name
        ? &quot;Hello, &quot; + name + &quot;. This HTTP triggered function executed successfully.&quot;
        : &quot;This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.&quot;;

    context.res = {
        // status: 200, /* Defaults to 200 */
        body: responseMessage
    };

};

export default httpTrigger;
```

Now that I had the shell created for the function, I created a new **getStats.ts** script in the **getGitHubRepoStats** function folder, copied in my initial code, and changed require statements to import statements at the top of the file. It looked like the following after finishing a few &quot;tweaks&quot;:

```typescript
import { Octokit } from &apos;@octokit/rest&apos;;
import { v4 as uuidv4 } from &apos;uuid&apos;;

// Create personal access token (with repo --&gt; public rights) at https://github.com/settings/tokens
let octokit: Octokit;
let ownersRepos;
let context;

export async function getStats(ctx) {
    context = ctx || { log: console.log };
    ownersRepos = getRepos();
    const stats = [];
    for (const repo of ownersRepos) {
        octokit = new Octokit({
            auth: repo.token
        });
        const ownerRepo = {
            owner: repo.owner,
            repo: repo.repo
        }
        const clones = await getClones(ownerRepo);
        const forks = await getTotalForks(ownerRepo);
        const views = await getPageViews(ownerRepo);

        const yesterdayRow = getTodayRow(ownerRepo, clones, forks, views);
        stats.push(yesterdayRow);
    }

    return stats;
}

function getRepos() {
    try {
        const repos = JSON.parse(process.env[&apos;GITHUB_REPOS&apos;]);
        context.log(&apos;Repos:&apos;, repos);
        return repos;
    }
    catch (e) {
        context.log(e);
        return [];
    }
}

function getTodayRow(ownerRepo, clones, forks, views) {
    const today = new Date();
    const yesterday = new Date(today.getFullYear(), today.getMonth(), today.getDate() - 1)
      .toISOString().split(&apos;T&apos;)[0] + &apos;T00:00:00Z&apos;;

    const todayClonesViewsForks = {
        id: uuidv4(),
        timestamp: yesterday,
        owner: ownerRepo.owner,
        repo: ownerRepo.repo,
        clones: 0,
        forks: forks,
        views: 0
    };
    const todayClones = clones.clones.find(c =&gt; c.timestamp === yesterday);
    const todayViews = views.views.find(v =&gt; v.timestamp === yesterday);
    if (todayClones) {
        todayClonesViewsForks.clones = todayClones.count;
    }
    if (todayViews) {
        todayClonesViewsForks.views = todayViews.count;
    }
    return todayClonesViewsForks;
}

async function getClones(ownerRepo) {
    try {
        // https://docs.github.com/en/rest/metrics/traffic#get-repository-clones
        const { data } = await octokit.rest.repos.getClones(ownerRepo);
        context.log(`${ownerRepo.owner}/${ownerRepo.repo} clones:`, data.count);
        return data;
    }
    catch (e) {
        context.log(`Unable to get clones for ${ownerRepo.owner}/${ownerRepo.repo}. You probably don&apos;t have push access.`);
    }
    return { count: 0, clones: [] }; // empty result so getTodayRow can still call clones.clones.find()
}

async function getTotalForks(ownerRepo) {
    try {
        // https://docs.github.com/en/rest/repos/forks
        const { data } = await octokit.rest.repos.get(ownerRepo);
        const forksCount = (data) ? data.forks_count : 0;
        context.log(`${ownerRepo.owner}/${ownerRepo.repo} forks:`, forksCount);
        return forksCount;
    }
    catch (e) {
        context.log(e);
        context.log(`Unable to get forks for ${ownerRepo.owner}/${ownerRepo.repo}. You probably don&apos;t have push access.`);
    }
    return 0;
}

async function getPageViews(ownerRepo) {
    try {
        // https://docs.github.com/en/rest/metrics/traffic#get-page-views
        const { data } = await octokit.rest.repos.getViews(ownerRepo);
        context.log(`${ownerRepo.owner}/${ownerRepo.repo} visits:`, data.count);
        return data;
    }
    catch (e) {
        context.log(`Unable to get page views for ${ownerRepo.owner}/${ownerRepo.repo}. You probably don&apos;t have push access.`);
        context.log(e);
    }
    return { count: 0, views: [] }; // empty result so getTodayRow can still call views.views.find()
}
```
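
One subtle piece of the code above is the &quot;yesterday&quot; timestamp that **getTodayRow** builds to match GitHub&apos;s traffic data. Here&apos;s an isolated sketch of that logic - my own variation, not the original - rewritten to work entirely in UTC so the result doesn&apos;t depend on the server&apos;s timezone:

```typescript
// Builds yesterday's date as an ISO timestamp pinned to midnight UTC,
// the format GitHub's traffic API uses (e.g. "2022-09-20T00:00:00Z").
// Note: the original script reads local-time date fields before calling
// toISOString(), which can shift the day depending on the timezone.
function yesterdayUtcMidnight(now: Date): string {
    const yesterday = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() - 1));
    return yesterday.toISOString().split('T')[0] + 'T00:00:00Z';
}
```

Date.UTC handles day 0 gracefully (it rolls back to the last day of the previous month), so month and year boundaries work without extra code.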

Next, I went into the **getGitHubRepoStats/index.ts** file, imported the **getStats.ts** script, and modified the body. Using this approach keeps the function nice and clean.

```typescript
import { AzureFunction, Context, HttpRequest } from &apos;@azure/functions&apos;;
import { getStats } from &apos;./getStats&apos;;

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise&lt;void&gt; {
    context.log(&apos;HTTP trigger function processed a GitHub repo stats request.&apos;);
    const stats = await getStats(context);
    context.log(&quot;The stats&quot;, stats);
    context.res = {
        body: stats
    };
};

export default httpTrigger;
```

I pressed F5, which prompted me to install the Azure Functions Core Tools. After the installation completed, the console showed several commands, displayed the Core Tools version, built the code, and launched my new function locally. I hit the **http://localhost:7071/api/getGitHubRepoStats** URL shown in the console and....drumroll please....it actually worked! Getting projects to work the first time is rare for me, so it was nice to have a quick &quot;win&quot; for once.

## 3\. Create a Function App in Azure

Now that the function was working locally, it was time to deploy it to Azure. I stopped my debugging session, went to the command palette (shift+cmd+p on Mac), and selected **Azure Functions: Create Function App in Azure**.

&lt;figure&gt;

[![](/images/blog/migrating-a-local-node-script-to-azure-functions-using-vs-code/image.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/09/image.png)

&lt;figcaption&gt;

Using the Azure Functions: Create Function App in Azure Option in VS Code

&lt;/figcaption&gt;

&lt;/figure&gt;

Once you select that option, you&apos;ll be prompted for:

- The Azure subscription to use

- The function name

- The runtime stack (I selected Node.js 16 LTS)

- The region

## 4\. Deploy the Azure Function Code

Once the Azure Function App is created you&apos;ll see a message about viewing the details. The next step is to deploy the code. That can be done by going back to the command palette in VS Code and selecting **Azure Functions: Deploy to Function App**. You&apos;ll be asked to select your subscription and Function App name.

Once the function is created in Azure you can go to the Azure extension in VS Code, expand your subscription, expand your Function App, right-click on the function and select **Browse Website**. Add &quot;/api/&lt;your\_function\_name&gt;&quot; to the URL (**getGitHubRepoStats** in this case) and if all of the planets align, you should see data returned from your function.

&lt;figure&gt;

[![](/images/blog/migrating-a-local-node-script-to-azure-functions-using-vs-code/image-2.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/09/image-2.png)

&lt;figcaption&gt;

Using the Azure VS Code extension to browse your Azure Functions website

&lt;/figcaption&gt;

&lt;/figure&gt;
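
With the function deployed, any HTTP client can consume it. Here&apos;s a minimal sketch of a client - my own example, assuming Node 18+ where **fetch** is built in; the host name is a placeholder for your Function App:

```typescript
// Shape of each row returned by getStats (matches getTodayRow's output).
interface RepoStatsRow {
    id: string;
    timestamp: string;
    owner: string;
    repo: string;
    clones: number;
    forks: number;
    views: number;
}

// The default HTTP-trigger route is /api/{functionName}.
function statsUrl(baseUrl: string): string {
    return baseUrl.replace(/\/+$/, '') + '/api/getGitHubRepoStats';
}

async function fetchRepoStats(baseUrl: string) {
    // Node 18+ ships fetch globally; no extra package needed.
    const response = await fetch(statsUrl(baseUrl));
    if (!response.ok) {
        throw new Error('Request failed with status ' + response.status);
    }
    return (await response.json()) as RepoStatsRow[];
}

// Usage (placeholder host):
// const rows = await fetchRepoStats('https://your-function-app.azurewebsites.net');
```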

## 5\. Environment Variables and Key Vault

You might have noticed that the function code relies on an environment variable named **GITHUB\_REPOS**. I added that key and value into the **Values** property of the **local.settings.json** file which is used when running the function locally (that file isn&apos;t checked into source control).

```json
{
  &quot;IsEncrypted&quot;: false,
  &quot;Values&quot;: {
    &quot;AzureWebJobsStorage&quot;: &quot;&quot;,
    &quot;FUNCTIONS_WORKER_RUNTIME&quot;: &quot;node&quot;,
    &quot;GITHUB_REPOS&quot;: &quot;[ { \&quot;owner\&quot;: \&quot;microsoft\&quot;, \&quot;repo\&quot;: \&quot;MicrosoftCloud\&quot;, \&quot;token\&quot;: \&quot;token-value\&quot; }, { \&quot;owner\&quot;: \&quot;microsoft\&quot;, \&quot;repo\&quot;: \&quot;brainstorm-fluidframework-m365-azure\&quot;, \&quot;token\&quot;: \&quot;token-value\&quot; } ]&quot;
  }
}
```
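
Since **GITHUB\_REPOS** is a JSON array serialized into a string, a typed version of the parsing that **getRepos** does might look like this (the RepoConfig interface is my addition, not part of the original script):

```typescript
// Shape of each entry in the GITHUB_REPOS environment variable.
interface RepoConfig {
    owner: string;
    repo: string;
    token: string;
}

// Parses the serialized JSON array, failing soft like getRepos does.
function parseRepos(raw: string | undefined): RepoConfig[] {
    if (!raw) {
        return [];
    }
    try {
        const parsed = JSON.parse(raw);
        return Array.isArray(parsed) ? parsed as RepoConfig[] : [];
    }
    catch {
        // Malformed JSON in the environment variable.
        return [];
    }
}

// Usage: const repos = parseRepos(process.env['GITHUB_REPOS']);
```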

I could deploy the function and have the **GITHUB\_REPOS** value show up automatically in the **Configuration --&gt; Application Settings** section of the Function App (you&apos;ll see that section in the Azure Portal). In my case that wasn&apos;t good enough though. The **GITHUB\_REPOS** value has GitHub personal access tokens in it that are used to make the API calls. I needed a more secure solution when I ran the function in Azure.

To handle that, I created a new Azure Key Vault secret that included the data required for the **GITHUB\_REPOS** environment variable. I then went into **Configuration --&gt; Application Settings** in the Function App and ensured that it had the following key/value pair:  
  

```properties
GITHUB_REPOS=@Microsoft.KeyVault(SecretUri=https://&lt;your_key_vault_name&gt;.vault.azure.net/secrets/&lt;your_secret_name&gt;/)
```

To get the Function App to successfully talk with Azure Key Vault and retrieve the secret, you&apos;ll also need to create a managed identity. You can [find details about that process here](https://learn.microsoft.com/en-us/azure/app-service/app-service-key-vault-references).

## Conclusion

Migrating a custom script to Azure Functions is a fairly straightforward process, especially if you&apos;re able to reuse a lot of your original code. In my case, it allowed me to expose the local script&apos;s functionality to anyone and any app. While this particular function is publicly accessible, it&apos;s important to mention that you can also [secure your functions](https://learn.microsoft.com/en-us/azure/azure-functions/security-concepts) as needed.

Is that the end of the story? Not for me. I also needed to create a Power Automate flow to consume the data from the function and update a data store. That&apos;s a subject for another post though.  
  
The code shown in this post can be found here: [https://github.com/DanWahlin/github-repo-stats](https://github.com/DanWahlin/github-repo-stats).

What&apos;s Next? The next post in this series titled [Use Power Automate to Retrieve Data from an Azure Function for Reporting](https://blog.codewithdan.com/use-power-automate-to-retrieve-data-from-an-azure-function-for-reporting/) demonstrates how to automate calling the Azure Function and storing the data.</content:encoded></item><item><title>New Video Series: All Things Microsoft Cloud</title><link>https://blog.codewithdan.com/new-video-series-all-things-microsoft-cloud/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-video-series-all-things-microsoft-cloud/</guid><pubDate>Wed, 24 Aug 2022 00:00:00 GMT</pubDate><content:encoded>[![All Things Microsoft Cloud - Ayca Bas and Dan Wahlin](/images/blog/new-video-series-all-things-microsoft-cloud/2022-08-24_11-23-45-1024x528.webp)](https://blog.codewithdan.com/wp-content/uploads/2022/08/2022-08-24_11-23-45.png)

I had a chance to sit down with my colleague [Ayça Baş](https://twitter.com/aycabs) as well as several special guests to talk about how different technologies across the Microsoft Cloud can be integrated together to build a variety of applications. Check out the different interviews in the video series below.

## What is the Microsoft Cloud?

Ayça and I talk about the overall Microsoft Cloud and services that are available. While [Azure](https://docs.microsoft.com/azure/developer/) is a central part of the Microsoft Cloud, you can also integrate with services across [Microsoft 365](https://docs.microsoft.com/microsoft-365/?view=o365-worldwide), [Power Platform](https://docs.microsoft.com/power-platform/), and [GitHub](https://github.com). Ayça and I also discuss a new [Build applications on the Microsoft Cloud](https://docs.microsoft.com/azure/architecture/guide/microsoft-cloud/overview) document that walks IT leaders, architects, and developers through the options available to leverage everything the Microsoft Cloud has to offer.

https://youtu.be/SfAy0f2ir5k

## Microsoft Cloud and Microsoft Graph

In this video we talk with [Yina Arenas](https://twitter.com/yina_arenas) about the role of Microsoft 365 and Microsoft Graph in the overall Microsoft Cloud. Yina shares the story of how Microsoft Graph was created and discusses the powerful APIs it offers to enable developers to integrate Microsoft 365 (and other) data into their applications.

https://youtu.be/MXq-M6qRffE

## Microsoft Cloud and Power Platform

In this video we talk about the role of Power Platform in the overall Microsoft Cloud. [April Dunnam](https://twitter.com/aprildunnam) discusses how to get started with Power Platform, what fusion development is, how to integrate with different APIs using connectors and Azure API Management, the VS Code extension for Power Platform, and more.

https://youtu.be/z19QZwmy1yg

## Microsoft Cloud and Microsoft 365/Microsoft Teams

In this video we talk about the role of Microsoft 365 and Microsoft Teams in the overall Microsoft Cloud. [Bob German](https://twitter.com/Bob1German) discusses how to get started building apps for Microsoft Teams using Power Platform and how custom apps and services can be built using the Teams Toolkit. He also shares information about a Microsoft Teams App Camp workshop that developers can take to dive in deeper.

https://youtu.be/777ypUr2hwA

## Microsoft Cloud and Accessibility

Accessibility plays a prominent role across Microsoft Cloud services. In this video we talk with [Dona Sarkar](https://twitter.com/donasarkar) about different accessibility features built-into cloud services and tools that developers can utilize to increase accessibility in their custom applications.

https://youtu.be/h0gByFYotZw

## Microsoft Cloud and GitHub

In this video we talk with [Todd Anglin](https://twitter.com/ToddAnglin) about how GitHub fits into the overall Microsoft Cloud and some of the technologies you can use to simplify integration with Azure.

https://youtu.be/a7zwFpkPoy0</content:encoded></item><item><title>Video: Show a user&apos;s emails in an ASP.NET Core app using Microsoft Graph</title><link>https://blog.codewithdan.com/video-show-a-users-emails-in-an-asp-net-core-app-using-microsoft-graph/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-show-a-users-emails-in-an-asp-net-core-app-using-microsoft-graph/</guid><pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate><content:encoded>I&apos;ve been working a lot with [.NET Core](https://dot.net) and [Microsoft Graph](https://aka.ms/ms-graph-docs) lately and decided to put together a short video based on a [Microsoft Learn module](https://aka.ms/learn-msgraph-email) covering how the technologies can be used together. If you haven&apos;t used Microsoft Graph before, it provides a secure, unified API to access organizational data and intelligence (data stored in Microsoft 365 for example).  
  
So why would you ever want to access a signed-in user&apos;s emails and include them in your custom app? The simple answer is, &quot;Bring organizational data where your users need it every day!&quot;. Instead of users switching away from your app to find a relevant email, calendar event, Microsoft Teams chat (and more) by jumping between various productivity apps, you can pull that type of data directly into your custom app. This allows users to work more efficiently and make more informed decisions, all while minimizing context shifts.

In this video I&apos;ll introduce you to:

- The role of security in making Microsoft Graph calls.
- Microsoft Identity and Microsoft Graph Middleware configuration.
- The role of permissions/scopes and access tokens.
- The Microsoft Graph .NET Core SDK and how it can be used.
- How to create reusable classes that handle making Microsoft Graph calls.
- Dependency injection and how it can be used to access a GraphServiceClient object.
- How to retrieve a batch of email messages using the UserMessagesCollectionRequest class.

## Show a user&apos;s emails in an ASP.NET Core app using Microsoft Graph

https://www.youtube.com/watch?v=acnFrkBL1kE</content:encoded></item><item><title>Start Learning TypeScript with these Short Videos</title><link>https://blog.codewithdan.com/start-learning-typescript-with-these-short-videos/</link><guid isPermaLink="true">https://blog.codewithdan.com/start-learning-typescript-with-these-short-videos/</guid><pubDate>Mon, 10 Jan 2022 00:00:00 GMT</pubDate><content:encoded>[TypeScript](https://www.typescriptlang.org/) continues to grow in popularity and for good reason. It adds &quot;guard rails&quot; to your code to help you spot issues early on, easily locate problem code, enhance productivity, provide consistency across code, and much more. While there are a lot of [TypeScript resources](https://www.typescriptlang.org/docs/) out there to get started learning the language, where can you go to get started quickly without wasting a lot of time?  
  
I recently published a series of short videos on TypeScript core concepts that can provide a great starting point. The videos are short, super focused, and many of them use the online [TypeScript Playground](https://www.typescriptlang.org/play) to demonstrate different concepts. There are a few videos on getting started working with TypeScript locally on your machine as well.

Here&apos;s more information about each video.

## 1\. Why Learn TypeScript?

Is it worth your time to learn TypeScript? Short answer (in my opinion anyway) is YES! In this video I&apos;ll walk through 5 reasons learning TypeScript is worth the effort. Since these videos are intended to be short I could only cover 5, but there are many additional reasons as well!

https://youtu.be/5S96t9kLC5w

## 2\. Adding TypeScript to a VS Code Project

How do you get started using TypeScript, writing, and building your code? I&apos;ll walk you through the basics of that process in this video.

https://youtu.be/vtpM7ght-7s

## 3\. How to Add webpack to a TypeScript Project

webpack&apos;s scary, right? Well, truth be told it can be intimidating at times, but it&apos;s pretty easy to use in TypeScript projects. I&apos;ll walk you through the process in this video.

https://youtu.be/ILEX6mKgB2E

## 4\. Getting Started with TypeScript Types

It&apos;s no secret that TypeScript adds &quot;strong typing&quot; into your code (they call it TypeScript for a reason). In this video I&apos;ll explain the primitive data types available and show how you can get started using them.

https://youtu.be/BdRFqLru3Z8
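
As a quick taste of what the video covers, here&apos;s a small sketch (my own example, not from the video) showing primitive type annotations in action:

```typescript
// Primitive type annotations catch mismatches at compile time.
const repoName: string = 'MicrosoftCloud';
const starCount: number = 42;
const isArchived: boolean = false;

// A typed function signature documents (and enforces) its inputs and output.
function describeRepo(name: string, stars: number): string {
    return name + ' has ' + stars + ' stars';
}

// describeRepo(repoName, 'lots') would be a compile-time error:
// Argument of type 'string' is not assignable to parameter of type 'number'.
```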

## 5\. Using Classes in TypeScript

Classes are a feature available in JavaScript that can be used to encapsulate your code. They&apos;re not needed for every type of project, but it&apos;s good to know what they&apos;re capable of. In this video I&apos;ll introduce classes and show how they can be used in TypeScript.

https://youtu.be/A4V1sU3p0S4
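
To give a feel for the syntax, here&apos;s a minimal sketch (my own example) of a TypeScript class with a constructor parameter property and a private field:

```typescript
// A small class demonstrating constructor parameter properties,
// a private field, and a typed method.
class Repo {
    private stars: number;

    // "public name" declares and assigns the property in one step.
    constructor(public name: string, initialStars: number) {
        this.stars = initialStars;
    }

    addStar(): number {
        this.stars = this.stars + 1;
        return this.stars;
    }
}

const repo = new Repo('MicrosoftCloud', 10);
```

Note that `private stars` is only enforced at compile time; it encapsulates the field from other TypeScript code rather than hiding it at runtime.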

## 6\. Using Interfaces in TypeScript

In an earlier video I introduced the concept of TypeScript types. In this video, I walk you through how you can use interfaces to build custom types and explain why you may want to do that. Interfaces are &quot;code contracts&quot; that can be used to describe the &quot;shape&quot; of an object, drive consistency across objects, and more.

https://youtu.be/dzfCgPFJyr4
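
Here&apos;s a small sketch (my own example) of the &quot;code contract&quot; idea from the video - an interface describing an object&apos;s shape:

```typescript
// An interface acts as a "code contract" describing the shape an object must have.
interface RepoInfo {
    owner: string;
    repo: string;
    stars?: number; // optional member
}

// Any object whose shape matches the interface satisfies the contract.
function fullName(info: RepoInfo): string {
    return info.owner + '/' + info.repo;
}

const demo: RepoInfo = { owner: 'microsoft', repo: 'MicrosoftCloud' };
```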

## 7\. Using Generics with TypeScript

Generics are &quot;code templates&quot; that can be reused in your code base. In this video I introduce the concept of generics and show simple examples of how they can be used in TypeScript.

https://youtu.be/nmCKKIxebJc

Are there more topics that I could have covered? Yep - there&apos;s always more. However, these videos should provide you with a solid starting point to understand core concepts and features. There are a lot of additional resources out there to learn TypeScript (you can start with the [docs](https://www.typescriptlang.org/docs/) or the [handbook](https://www.typescriptlang.org/docs/handbook/intro.html)), but I hope these short videos help get you started quickly. I&apos;m personally a huge fan of TypeScript and highly recommend making time to learn it.

If you&apos;d like to dive into more details about TypeScript fundamentals, check out the [TypeScript Fundamentals video course on Pluralsight](https://pluralsight.pxf.io/a1LxaZ) that [John Papa](https://twitter.com/john_papa) and I created.</content:encoded></item><item><title>Error Installing Deno on Windows 11 using PowerShell 7.2 (and how I got it working)</title><link>https://blog.codewithdan.com/error-installing-deno-on-windows-11-using-powershell-7-2/</link><guid isPermaLink="true">https://blog.codewithdan.com/error-installing-deno-on-windows-11-using-powershell-7-2/</guid><pubDate>Mon, 03 Jan 2022 00:00:00 GMT</pubDate><content:encoded>I&apos;ve been playing around with [Deno](https://deno.land/) lately and wanted to get it installed on a new Windows 11 laptop I bought. To install Deno, you can go to the [https://deno.land/#installation](https://deno.land/#installation) page and follow the instructions for your operating system. I&apos;m currently using PowerShell 7.2, so I tried the suggested command since it&apos;s normally a quick and easy install:

```powershell
iwr https://deno.land/install.ps1 -useb | iex
```

That led to the following error:

```powershell
SetValueInvocationException: Exception setting &quot;SecurityProtocol&quot;: &quot;The requested security protocol is not supported.&quot;
```

After reading a [few posts](https://stackoverflow.com/a/48030563) and an [issue on GitHub](https://github.com/denoland/deno_install/issues/191#issuecomment-915993531), the suggested fix appeared to be the following. Unfortunately, it didn&apos;t work for me:

```powershell
[Net.ServicePointManager]::SecurityProtocol = &quot;tls12, tls11, tls&quot;
```

After scanning the [GitHub issue](https://github.com/denoland/deno_install/issues/191#issuecomment-915993531) further, I finally found a curl command that worked correctly with PowerShell. Problem solved (finally)!

```powershell
curl.exe -fsSL https://deno.land/x/install/install.ps1 | out-string | iex
```

Hopefully this helps someone else who gets stuck on the issue. If anyone knows why the _SecurityProtocol_ command didn&apos;t fix it, please leave a comment. I&apos;d be interested in knowing how to get the _iwr_ working correctly since that&apos;s the default suggestion on the Deno installation page.</content:encoded></item><item><title>Developing Real-Time Collaborative Apps with Azure, Microsoft 365, Power Platform, and Github</title><link>https://blog.codewithdan.com/developing-real-time-collaborative-apps-with-azure-microsoft-365-power-platform-and-github/</link><guid isPermaLink="true">https://blog.codewithdan.com/developing-real-time-collaborative-apps-with-azure-microsoft-365-power-platform-and-github/</guid><pubDate>Fri, 15 Oct 2021 00:00:00 GMT</pubDate><content:encoded>&lt;figure&gt;

[![](/images/blog/developing-real-time-collaborative-apps-with-azure-microsoft-365-power-platform-and-github/2021-11-09_10-31-05.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/11/2021-11-09_10-31-05.png)

&lt;figcaption&gt;

Learn Together: Developing Real-Time Collaborative Apps

&lt;/figcaption&gt;

&lt;/figure&gt;

Have you considered adding real-time collaboration into your apps? Do you want to learn how to collaborate more efficiently on code your team is writing?

In today’s distributed work environment there are many new and exciting collaborative technologies available across **Azure, Microsoft 365, Power Platform, and GitHub** that you can tap into today. These technologies can be used to increase user productivity as well as developer productivity and take your applications to the next level!

For example, you can:

- Allow users to collaborate on data in real-time within your application using technologies such as the Fluid Framework or SignalR.
- Add real-time chat, audio, and video capabilities into your application using Azure Communication Services.
- Integrate business data into your app including user presence information by using Microsoft 365 and Azure.
- Integrate your app with collaboration hubs such as Microsoft Teams.
- Collaborate on code more efficiently using new technologies available in GitHub.

Videos from the **Developing Real-time Collaborative Apps** event are now available to help you learn about implementing collaborative scenarios in your own apps!

The videos cover:

- **What is collaboration-first development?** - [Dan Wahlin](https://twitter.com/danwahlin) and [April Dunnam](https://twitter.com/aprildunnam) discuss scenarios where real-time collaboration can be used in applications.  
    
- **Adding real-time data into your apps** - [Dan Wahlin](https://twitter.com/danwahlin) and [Dan Roney](https://twitter.com/DanRoney10) talk about the [Fluid Framework](https://fluidframework.com/?WT.mc_id=m365-49097-dwahlin) and [Azure Fluid Relay](https://docs.microsoft.com/en-us/azure/azure-fluid-relay/?WT.mc_id=m365-49097-dwahlin) for real-time data in apps.  
    
- **Adding real-time communication into your apps** - [Piyali Dey](https://twitter.com/piyali_vancity) and [Reza Jooyandeh](https://twitter.com/rezajooyandeh) discuss [Azure Communication Services](https://docs.microsoft.com/en-us/azure/communication-services/?WT.mc_id=m365-49097-dwahlin) and show how real-time chat and audio/video can be added to apps.  
    
- **Bringing your apps where your users work every day** - [Ayca Bas](https://twitter.com/aycabs) and Juma George Odhiambo talk about getting real-time data from Microsoft Graph into your applications to show user presence information. Technologies covered include [Microsoft Graph](https://docs.microsoft.com/en-us/graph/change-notifications-delivery?WT.mc_id=m365-49097-dwahlin), [Power Platform](https://docs.microsoft.com/en-us/power-platform/?WT.mc_id=m365-49097-dwahlin), [Microsoft Graph Toolkit](https://docs.microsoft.com/en-us/graph/toolkit/overview?WT.mc_id=m365-49097-dwahlin), [Azure Event Hub](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features?WT.mc_id=m365-49097-dwahlin), [Azure Functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview?WT.mc_id=m365-49097-dwahlin), and [Azure SignalR](https://docs.microsoft.com/en-us/azure/azure-signalr/signalr-overview?WT.mc_id=m365-49097-dwahlin).
    
- **Enhancing your development collaboration and productivity** - [Burke Holland](https://twitter.com/burkeholland) and [Brigit Murtaugh](https://twitter.com/BrigitMurtaugh) discuss [Github extensions](https://marketplace.visualstudio.com/search?term=github&amp;target=VSCode&amp;category=All%20categories&amp;sortBy=Relevance&amp;WT.mc_id=m365-49097-dwahlin) available in Visual Studio Code as well as additional features such as [Codespaces](https://github.com/features/codespaces?WT.mc_id=m365-49097-dwahlin) in Github, [https://github.dev](https://github.dev), and more.

## 1. What is collaboration-first development?

https://www.youtube.com/watch?v=ycfltKAyrDc

## 2. Adding real-time data into your apps

https://www.youtube.com/watch?v=LL0ppKbdQYI

## 3. Adding real-time communication into your apps

https://www.youtube.com/watch?v=uieQtmGUZ-I

## 4. Bringing your apps where your users work every day

https://www.youtube.com/watch?v=1V8wpXjr240

## 5. Enhancing your development collaboration and productivity

https://www.youtube.com/watch?v=CV3F8bJtatE</content:encoded></item><item><title>Getting Started with Azure Static Web Apps</title><link>https://blog.codewithdan.com/getting-started-with-azure-static-web-apps/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-with-azure-static-web-apps/</guid><pubDate>Tue, 18 May 2021 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/getting-started-with-azure-static-web-apps/azure-static-web-apps.webp)

What does it take to deploy a modern JavaScript web app? Your initial response might be, &quot;Copy the files up to the server - how hard could it be?&quot;

The reality is that deploying modern JavaScript apps is a bit more complicated than simply copying files up to a server. For example, let&apos;s say that you have a Single Page Application (a static web application) built with React, Vue, Angular, or another technology that hits an API, supports user logins and roles, and has to secure specific server-side routes. To deploy the app you&apos;d need to do something like the following at a minimum:

1. Build the application and generate the bundles.
2. Build the APIs (depending upon what technology is used).
3. Set up a server that can host the SPA bundles and run the APIs.
4. If the static web app and APIs are on separate servers, configure CORS or a reverse proxy.
5. Configure SSL on the server.
6. Add a custom domain.
7. Configure a default fallback route so that the static web app&apos;s client-side routes work properly and you don&apos;t get a 404.
8. Deploy the SPA bundles to the server.
9. Deploy the API binaries or scripts up to the server.

Are there any additional considerations to take into account? Definitely! Here are a few additional ones:

1. Create a &quot;staging&quot; environment that mirrors the production environment so that you can do testing and QA before going to production.
2. Integrate user authentication and authorization from a cloud provider or 3rd party and secure application routes.
3. Automate the build process for the static web app and APIs and create a build pipeline.
4. Deploy the static web app to a CDN or to multiple servers around the world.
5. Deploy the app&apos;s APIs to a cluster of servers if they need to handle variable loads.
6. More...

Whew....that is a lot of work! Isn&apos;t it supposed to be easy to deploy a static web app and get it up and running? When you factor in the creation of a &quot;staging&quot; site, authentication/authorization, server configuration, dealing with server-side and client-side routes, global distribution of your app (if required), and other requirements your head can start to spin.

Are there any services out there that can help simplify the process of deploying a static web app and its associated APIs? You could always do it yourself using various cloud provider services, but you&apos;d have to set up storage, web hosting, and APIs, manage builds and deployments, SSL certs, and custom domains, handle security, and much more. You could also use services like Netlify ([https://netlify.com](https://netlify.com)), Firebase ([https://firebase.google.com](https://firebase.google.com/)), and many others as well.

Fortunately, there&apos;s a new kid on the block: Azure Static Web Apps. I already use Azure for all of my deployments so I&apos;m really excited about this new functionality Microsoft is adding to Azure. Let&apos;s look at how it works.

If you&apos;d prefer to watch a video, here&apos;s one I created that also goes through the steps discussed in this post.  
  
**Getting Started with Azure Static Web Apps**

https://www.youtube.com/watch?v=oPqBuLfIXII

## Introducing Azure Static Web Apps

Microsoft announced the Azure Static Web Apps service at their [Build 2020 conference](https://register.build.microsoft.com/?WT.mc_id=m365-28924-dwahlin). I was fortunate to get early access and have been really impressed with the functionality they&apos;re providing. Since then, they&apos;ve made the service generally available (GA) and it currently supports two plans. The free plan allows you to get started absolutely free, while the standard plan includes all of the free features as well as the ability to customize functionality such as authentication and APIs.

[![](/images/blog/getting-started-with-azure-static-web-apps/swa-plans.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/05/swa-plans.png)

You can view pricing details for the standard plan at [https://azure.microsoft.com/en-us/pricing/details/app-service/static](https://azure.microsoft.com/en-us/pricing/details/app-service/static?WT.mc_id=m365-28924-dwahlin).

Here are the basic steps to get started with Azure Static Web Apps:

1. Push your app code to Github.
2. Sign in to the Azure Portal, search for &quot;Static Web App&quot;, and select the **Create** button.
3. Fill out the form, sign in to Github, and select your repository and branch.
4. Define where your app, APIs, and build output are located.
5. Select the **Create** button and watch the magic happen!
6. View your static web app.

Before going through these steps you’ll need to have an Azure account. If you don’t have one you can setup a free trial account at [https://azure.microsoft.com/free](https://aka.ms/azure-free-acct). Let&apos;s walk through each of these steps.

## Step 1: Push Your App Code to Github

![](/images/blog/getting-started-with-azure-static-web-apps/github_logo-300x64.webp)

If you&apos;re already using Github to store your code then this first step is the easiest of all. If you&apos;re new to Github, check out [how to get started using it](https://help.github.com/en/github/getting-started-with-github/?WT.mc_id=m365-28924-dwahlin). Believe it or not, once your static web app is on Github and your app is ready to try out, the hard part is done!

If your app has APIs that you want to host in Azure then you can use [Azure Functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview/?WT.mc_id=m365-28924-dwahlin) (Node.js 12, .NET Core 3.1, or Python 3.8 are supported - [check the docs](https://docs.microsoft.com/en-us/azure/static-web-apps/apis/?WT.mc_id=m365-28924-dwahlin) for updates). You can use Azure Static Web Apps without any APIs at all of course. Maybe you have a truly static web app that doesn’t need to call out to a server for data. Or, if your app does have APIs and they’re hosted somewhere else that’s fine too. They’re flexible!

If your app does have APIs that you want to host in Azure and you&apos;re new to Azure Functions, here&apos;s a quick overview of what they are and what you can do with them. Azure Functions provide a &quot;serverless&quot; environment for hosting a variety of APIs that can serve data over HTTP or integrate with other Azure services. An Azure Function written with JavaScript consists of an **index.js** file that contains your code as well as a **function.json** file that defines the inputs and outputs for the function. Here&apos;s an example of a function that is triggered by an HTTP request and returns JSON:

```javascript
const customers = require(&apos;../data/customers.json&apos;);

module.exports = async function (context, req) {
    context.res = {
        headers: {
          &apos;Content-Type&apos;: &apos;application/json&apos;    
        },
        body: customers
    };
}
```

The input and output bindings (the type of data that flows in and out of the function) can be defined in the function.json file. Here&apos;s an example of input/output bindings for the previous function:

```json
{
  &quot;bindings&quot;: [
    {
      &quot;authLevel&quot;: &quot;anonymous&quot;,
      &quot;type&quot;: &quot;httpTrigger&quot;,
      &quot;direction&quot;: &quot;in&quot;,
      &quot;name&quot;: &quot;req&quot;,
      &quot;methods&quot;: [
        &quot;get&quot;
      ],
      &quot;route&quot;: &quot;customers/&quot;
    },
    {
      &quot;type&quot;: &quot;http&quot;,
      &quot;direction&quot;: &quot;out&quot;,
      &quot;name&quot;: &quot;res&quot;
    }
  ]
}
```

This function is triggered by an HTTP GET request to **https://\[yourserver.com\]/api/customers**. The incoming request object is named **req**. The function returns a response using an object named **res**. Although a complete discussion of Azure Functions is outside the scope of this post, they&apos;re really powerful and definitely worth looking into more.
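To round out the picture, here&apos;s how the static web app might call that function from the browser. This is a minimal sketch; it assumes the function is deployed alongside the app, where Azure Static Web Apps exposes functions under the **/api** prefix (which is also why the route in **function.json** is just **customers/**):

```javascript
// Minimal sketch: calling the customers function from the browser.
// Assumes the function is exposed under the default /api prefix.
async function getCustomers() {
  const response = await fetch('/api/customers');
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```

Calling `getCustomers()` returns the parsed JSON array that the function serves.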

Once your static web app and Azure Functions APIs are up on Github, you&apos;re ready to create a static web app service in Azure. Let&apos;s look at that process.

## Step 2: Sign in to the Azure Portal, Search for &quot;Static Web Apps&quot;, and Click the Create Button

&lt;figure&gt;

![](/images/blog/getting-started-with-azure-static-web-apps/2020-05-15_22-35-39.webp)

&lt;figcaption&gt;

Searching for the Static Web Apps resource in the Azure portal.

&lt;/figcaption&gt;

&lt;/figure&gt;

Visit [https://portal.azure.com](https://portal.azure.com/?WT.mc_id=m365-28924-dwahlin), sign-in, and use the search box at the top to locate the **Static Web Apps** service. Select it to get to the service&apos;s information page. Take a few moments to read about what the service offers and when you&apos;re ready, click the **Create** button to get started.

## Step 3: Fill Out the Form, Sign in to Github, and Select Your Repository

In this step you&apos;ll fill out the Static Web Apps form and sign-in to Github to select your repository. Here are the fields to fill out:

- Select your Azure subscription.
- Create or select a [Resource Group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal/?WT.mc_id=m365-28924-dwahlin) (a container that holds related resources such as your new static web app functionality)
- Name your app.
- Select a region.
- Select a SKU/plan.
- Sign-in to Github and select your org, repo, and branch. 

Once you&apos;re done filling out the form click the **Next: Build &gt;** button.

## Step 4: Define Where Your App, APIs, and Build Output are Located

&lt;figure&gt;

[![](/images/blog/getting-started-with-azure-static-web-apps/create-swa-app-portal.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/05/create-swa-app-portal.png)

&lt;figcaption&gt;

Create a Static Web App

&lt;/figcaption&gt;

&lt;/figure&gt;

The next step is to define where your app is located in the repository, where your Azure Functions APIs are located, and the directory where your build artifacts (your bundles) are located. You can even preview the workflow file that will be added to your Github repository.

[![](/images/blog/getting-started-with-azure-static-web-apps/swa-build-details.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/05/swa-build-details.png)

After entering that information click the **Review + create** button.

**WARNING:** Make sure you enter correct values for your app location (where your **package.json** file lives), the API location (where your Azure Functions APIs live, if you&apos;re publishing them to Azure), and the build output directory (note that this is relative to the app location). The deployment will NOT work otherwise. I&apos;ll provide more information about checking your build status later in this post.

## Step 5: Click the Create Button and Watch the Magic Happen!

![](/images/blog/getting-started-with-azure-static-web-apps/coffee-1024x684.webp)

It&apos;s time to launch your static web app! Review the summary information provided and then click the **Create** button. Go grab a coffee, kick back, relax, and watch a (super short) YouTube video while a [Github Action](https://github.com/features/actions/?WT.mc_id=m365-28924-dwahlin) builds your code and deploys it to Azure automatically.

[![](/images/blog/getting-started-with-azure-static-web-apps/swa-review-portal-2.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/05/swa-review-portal-2.png)

## Step 6: View Your Static Web App

OK - coffee time&apos;s over! Once your static web app is created, click the **Go to resource** button.

![](/images/blog/getting-started-with-azure-static-web-apps/2020-05-14_18-41-25.webp)

Go to your newly created static web app and click the site&apos;s URL to view it.

![](/images/blog/getting-started-with-azure-static-web-apps/2020-05-14_18-43-41-2-1024x560.webp)

If you&apos;d like to see the build in action on Github, click the **blue arrow** above the site&apos;s URL (note that this will disappear after a while) or the **GitHub Action runs** link: 

![](/images/blog/getting-started-with-azure-static-web-apps/2020-05-14_18-43-41-1-1024x560.webp)

Here&apos;s an example of what the Github Action build created in your repository by Azure Static Web Apps looks like. This repository is located at [https://github.com/DanWahlin/Angular-JumpStart](https://github.com/DanWahlin/Angular-JumpStart) if you want to try one out that already has an app and functions available. Every time you push code to your chosen repository branch, the Github Action build process kicks off automatically.

![](/images/blog/getting-started-with-azure-static-web-apps/2020-05-14_22-23-29-1024x624.webp)

![](/images/blog/getting-started-with-azure-static-web-apps/2020-05-14_22-27-48-1024x664.webp)

Let&apos;s look at an app that was deployed to Azure Static Web Apps.

## Deploying a Static Web App and API

To try out the new Azure Static Web Apps functionality I decided to use a sample Angular project I have on Github called [Angular Jumpstart](https://github.com/DanWahlin/Angular-JumpStart). It originally relied on Node.js/Express for the back-end APIs and is easy to get going locally. See the **README.md** file for more details on getting it running on your machine.

The challenge with the original version of the app was that I needed to understand what Azure Static Web Apps wanted and I needed to convert my existing Node.js RESTful APIs into [Azure Functions](https://azure.microsoft.com/en-us/services/functions/?WT.mc_id=m365-28924-dwahlin) (I no longer need a server to host the APIs on which is great!). You can find [documentation on the conversion process](https://docs.microsoft.com/en-us/learn/modules/shift-nodejs-express-apis-serverless/?WT.mc_id=m365-28924-dwahlin) on the Microsoft docs site if you’re interested.

After getting everything in place I decided there was no time like the present to get things started. I went through the steps listed above to create the Azure Static Web App and my first build started on Github. Looking good....looking good....damn...it failed. I hadn&apos;t read any of the docs at this point but I figured it&apos;d be interesting to see how things went without any modifications (aside from the Node --&gt; Azure Functions conversion).

It turned out that the first build failed due to putting an incorrect path in for the build output (step 4 above). I fixed that, got the build working, and successfully deployed the site. Success....or so I thought. While the shell of the site loaded, none of the API calls worked which meant no data loaded into the web page.

I jumped on a call with my buddy [John Papa](https://twitter.com/john_papa) from Microsoft (fortunately for me, he had been spending a lot of time with this new service) and he had me adjust a few things in my functions. The main change was that &quot;api&quot; needed to be taken out of the route values in the **function.json** files. For example, **api/customers** was converted to **customers**.

Once that change was made along with a few other minor ones, the site sprang to life. However, if I refreshed the page I got a 404 error because the route was evaluated on the server-side instead of redirecting back to **index.html**. Azure Static Web Apps to the rescue! They have a nice routing solution (which I&apos;m only going to scratch the surface of in this post) that lets you handle the proper redirect to the static web app&apos;s client-side routes.

You can add a **staticwebapp.config.json** file into the root of your project to handle redirecting:

```json
{
  &quot;navigationFallback&quot;: {
    &quot;rewrite&quot;: &quot;/index.html&quot;
  }
}
```

Once I added the **staticwebapp.config.json** file the site was rebuilt/redeployed and everything worked as expected. You can get more information about routing, handling client-side redirects, and even securing server-side APIs using authentication/authorization [in the Azure Static Web Apps docs](https://aka.ms/swadocs).
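One related option worth knowing about: the routing docs also describe a **navigationFallback.exclude** setting so that requests for static assets (images, CSS, and so on) return a real 404 instead of being rewritten to **index.html**. Here&apos;s a sketch; the glob patterns below are illustrative:

```json
{
  &quot;navigationFallback&quot;: {
    &quot;rewrite&quot;: &quot;/index.html&quot;,
    &quot;exclude&quot;: [&quot;/images/*.{png,jpg,gif}&quot;, &quot;/css/*&quot;]
  }
}
```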

![](/images/blog/getting-started-with-azure-static-web-apps/2020-05-16_11-53-37-1024x713.webp)

Now any time I want to make a change I simply push it up to Github, that kicks off the Github Action build process and deployment, and the change is in production on Azure within a few minutes. Pretty amazing! Feel free to clone the [Angular Jumpstart](https://github.com/DanWahlin/Angular-JumpStart) project and try out the steps shown earlier on your own.

## Conclusion

Although deploying modern JavaScript apps can be pretty challenging when you factor in all of the required tasks, Azure Static Web Apps greatly simplifies the process and makes it a piece of cake to deploy an app once you have it setup and configured.

**NOTE:** If you&apos;re building [Microsoft Teams apps](https://aka.ms/ms-teams-docs) then Azure Static Web Apps can provide a great way to host your Teams app depending on the technology used by the app!

So what&apos;s next? There&apos;s quite a bit more you can do with Azure Static Web Apps such as:

- Add a custom domain to your static web app using the Azure portal.
- Create a staging slot in Azure Static Web Apps to test your app (for example test a pull request) before swapping it over to production.
- Add authentication/authorization using Microsoft, Google, Facebook, Twitter, or Github. You can add users, associate roles with back-end routes and more.
- Although this example shows an Angular application, you can deploy many other app types as well:
    - React
    - Vue
    - Hugo
    - Svelte
    - Gatsby
    - Next.js
    - More...

Take Azure Static Web Apps for a spin and see what you think! Here are some additional links you can visit to learn more. 

- Static Web Apps docs:   
    [https://aka.ms/azure-swa-docs](https://aka.ms/azure-swa-docs)   
    
- Static Web Apps Learn modules (Angular, React, Svelte, or Vue JavaScript app and API):   
    [https://aka.ms/azure-swa-learn](https://aka.ms/swaframeworks)  
    
- Static web app with the Gatsby static site generator:   
    [https://aka.ms/azure-swa-gatsby](https://aka.ms/azure-swa-gatsby)</content:encoded></item><item><title>Getting Started Calling the Microsoft Graph API</title><link>https://blog.codewithdan.com/getting-started-calling-the-microsoft-graph-api/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-calling-the-microsoft-graph-api/</guid><pubDate>Thu, 29 Apr 2021 00:00:00 GMT</pubDate><content:encoded>In this post I&apos;m going to share a quick tip on how to get started calling the [Microsoft Graph API](https://aka.ms/msgraph-overview-docs). If you&apos;re new to Microsoft Graph, here&apos;s a short definition for you:

&gt; Microsoft Graph provides a secure and unified API that can be used to access Microsoft 365 and other cloud data and intelligence.

NOTE: You can watch a video about everything covered here on the [Microsoft 365 Developer YouTube channel](https://aka.ms/m365youtube).

https://www.youtube.com/watch?v=f_3wc4UgqTI

In a nutshell, you can use Microsoft Graph to retrieve information about users, groups, emails, Teams chats, OneDrive files, meetings, to-do list tasks, and much more and then pull that data into apps where your users work every day.  

&lt;figure&gt;

[![](/images/blog/getting-started-calling-the-microsoft-graph-api/2021-04-29_10-37-20-1024x948.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/04/2021-04-29_10-37-20.png)

&lt;figcaption&gt;

Retrieving Microsoft 365 Data

&lt;/figcaption&gt;

&lt;/figure&gt;

That means if I&apos;m the user using your application, I can log in and see my list of meetings or files related to a particular topic listed right in the app. Or, if I have emails that are related to a particular scenario, I can see my emails after logging in. Of course, this is all done in a secure manner where the user has to consent and give their approval.

Before moving on, it&apos;s important to clarify one thing: Microsoft Graph isn&apos;t a graph database or GraphQL. Microsoft Graph is a RESTful API, so the same principles you&apos;ve likely used with other APIs (Web API with .NET, APIs exposed using Express/Node.js, and so forth) apply here as well.

To get started with Microsoft Graph you can use the [Microsoft Graph Explorer](http://aka.ms/g-explorer). At the Microsoft Graph Explorer website you can see which Microsoft Graph APIs are available and practice using them all within the confines of the browser. For example, in the images below you&apos;ll notice there is an area where you can select a sample query to run and an area where the API&apos;s URL is defined. Select the &quot;my profile&quot; option in the **Sample queries** section to get started.

&lt;figure&gt;

[![](/images/blog/getting-started-calling-the-microsoft-graph-api/2021-04-29_13-31-15-1024x563.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/04/2021-04-29_13-31-15.png)

&lt;figcaption&gt;

Microsoft Graph explorer sample queries

&lt;/figcaption&gt;

&lt;/figure&gt;

&lt;figure&gt;

[![](/images/blog/getting-started-calling-the-microsoft-graph-api/2021-04-29_13-32-44-1024x563.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/04/2021-04-29_13-32-44.png)

&lt;figcaption&gt;

Microsoft Graph Explorer API URLs

&lt;/figcaption&gt;

&lt;/figure&gt;

As mentioned, Microsoft Graph is a RESTful API used to get Microsoft 365 data. For example, I can call into the profile API and get the logged in user&apos;s profile using the following URL (shown above in the previous image):

```bash
https://graph.microsoft.com/v1.0/me
```
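Outside of Graph Explorer, your app would call that same endpoint with an OAuth access token in the **Authorization** header. Here&apos;s a minimal sketch; acquiring the token (typically via a library such as MSAL) is outside the scope of this post:

```javascript
// Minimal sketch: calling the Graph /me endpoint with a bearer token.
// The accessToken value would come from your auth library (e.g. MSAL).
async function getMyProfile(accessToken) {
  const response = await fetch('https://graph.microsoft.com/v1.0/me', {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  return response.json();
}
```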

When you first go to the Microsoft Graph Explorer you won&apos;t be logged in. That means you&apos;ll get sample data back for a user&apos;s profile, and you&apos;ll see that a display name of Megan Bowen is used. You can also get some information about Megan&apos;s email address and some other info in her profile. Notice that the data returned is JSON (JavaScript Object Notation) data.

&lt;figure&gt;

[![](/images/blog/getting-started-calling-the-microsoft-graph-api/2021-04-29_13-35-40-1024x563.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/04/2021-04-29_13-35-40.png)

&lt;figcaption&gt;

JSON data returned for a user&apos;s profile.

&lt;/figcaption&gt;

&lt;/figure&gt;

In addition to that, you can run other queries. If you want to do this for Megan (the anonymous user you&apos;ll see) you can leave it as is, but if you want to do it for yourself, you can either log in with your credentials or better yet you could log in with a [free developer tenant](https://developer.microsoft.com/en-us/microsoft-365/dev-program?WT.mc_id=m365-20792-dwahlin) so that you can safely play around.

If you go to the sample queries section and click on &quot;my photo&quot; you&apos;ll see Megan&apos;s photo displayed in the browser. Look at the API URL and you&apos;ll notice the following value displayed:

```bash
https://graph.microsoft.com/v1.0/me/photo/$value
```

If you delete **$value**, you&apos;ll get back JSON data about the image. Notice that it contains height, width, and some other data as well.

&lt;figure&gt;

[![](/images/blog/getting-started-calling-the-microsoft-graph-api/2021-04-29_13-46-06-1024x563.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/04/2021-04-29_13-46-06.png)

&lt;figcaption&gt;

Data returned about a user&apos;s profile image.

&lt;/figcaption&gt;

&lt;/figure&gt;

You can get to Megan&apos;s email messages as well. I want to emphasize again that you have to have a logged in user who has consented to access this information. If you want to display a user&apos;s email messages, they&apos;d have to log in first. If they approve it, you could pull their mail in and display it to them in an app.

For example, maybe you&apos;re working on a sales application right now, and you want to pull in sales specific emails so that they can see what they&apos;ve said in the past to customers. You could filter based on subject, for example, and then display the body of the message to the sales person after they&apos;ve logged in.

You can also retrieve calendar events and more. If you go the sample queries search box (lower left of the screen) and type &quot;calendar&quot; you&apos;ll notice you can retrieve calendar events for the next week or all events. Maybe you want all upcoming events but only want to retrieve a limited amount of data about each event. If you click on the &quot;all events in my calendar&quot; query you can get that data. Looking at the JSON that&apos;s returned you&apos;ll see quite a bit of data displayed.

If you only want the **subject**, **body**, and maybe the **start** and the **end** dates, you can adjust that in the URL by deleting the properties you don&apos;t want. The properties to return in the JSON data are controlled using the **$select** query parameter.
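To make the idea concrete, here&apos;s a small sketch that builds the same kind of URL in code. The property names follow the Graph event schema shown above:

```javascript
// Build a Graph URL that returns only the listed properties via $select.
function buildSelectUrl(resource, properties) {
  return `https://graph.microsoft.com/v1.0/${resource}?$select=${properties.join(',')}`;
}

const url = buildSelectUrl('me/events', ['subject', 'body', 'start', 'end']);
// url → https://graph.microsoft.com/v1.0/me/events?$select=subject,body,start,end
```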

&lt;figure&gt;

[![](/images/blog/getting-started-calling-the-microsoft-graph-api/2021-04-29_13-52-27-1024x563.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/04/2021-04-29_13-52-27.png)

&lt;figcaption&gt;

Changing returned properties using the $select query parameter.

&lt;/figcaption&gt;

&lt;/figure&gt;

Now run the query and you&apos;ll notice that only those properties are returned. You can also add different types of filtering if you&apos;d like. Append an **&amp;** to the end of the URL followed by **$filter=** and your filter expression. For example, you can grab items greater than a date, less than a date, etc. Here&apos;s an example of using **ge** to grab calendar events greater than or equal to a specific date. Notice the use of the **$filter** query parameter.

&lt;figure&gt;

[![](/images/blog/getting-started-calling-the-microsoft-graph-api/2021-04-29_13-57-09-1024x85.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/04/2021-04-29_13-57-09.png)

&lt;figcaption&gt;

Using the $filter query parameter

&lt;/figcaption&gt;

&lt;/figure&gt;

After selecting **Run query** you&apos;ll have the same data, but now it&apos;s filtered and only the specific properties you selected are returned! Additional information about selecting, querying, sorting, and performing other types of operations can be found at [https://aka.ms/msgraph-query-params](https://aka.ms/msgraph-query-params).
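As a sketch, the same filter can be composed in code. Note that in a real request the expression would typically be URL-encoded; the **start/dateTime** property path and the date value below are illustrative:

```javascript
// Build a Graph URL with a $filter expression.
// ge means "greater than or equal"; OData string literals use single quotes.
function buildFilterUrl(resource, filterExpression) {
  return `https://graph.microsoft.com/v1.0/${resource}?$filter=${filterExpression}`;
}

const url = buildFilterUrl(
  'me/events',
  "start/dateTime ge '2021-04-01T00:00:00Z'"
);
```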

## Conclusion

I hope this helps you out as you get started using Microsoft Graph. If you&apos;d like a more hands-on approach to using Graph Explorer and Microsoft Graph in general, check out the free Microsoft Graph Fundamentals learning path available at [https://aka.ms/learn-graph](https://aka.ms/learn-graph).</content:encoded></item><item><title>Azure Communications Voice Calling QuickStart</title><link>https://blog.codewithdan.com/azure-communications-voice-calling-quickstart/</link><guid isPermaLink="true">https://blog.codewithdan.com/azure-communications-voice-calling-quickstart/</guid><pubDate>Mon, 08 Mar 2021 00:00:00 GMT</pubDate><content:encoded>In this post, I&apos;m going to walk you through the process of getting started with adding voice calling into your apps using [Azure Communication Services](https://docs.microsoft.com/en-us/azure/communication-services/overview?WT.mc_id=m365-19887-dwahlin) (ACS). If you haven&apos;t read my [previous post](https://blog.codewithdan.com/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/) or watched my video on [&quot;What is Azure Communication Services?&quot;](https://www.youtube.com/watch?v=SM2Rgyi_0XU) I&apos;d recommend doing that first so that you understand what ACS is all about and the key features it offers.

https://www.youtube.com/embed/SM2Rgyi_0XU

In a nutshell, ACS allows you to add voice, video, chat, SMS, and other telephony features into your applications. It can be used in web apps, desktop apps, or mobile apps.

The ACS docs have a [Calling QuickStart](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?WT.mc_id=m365-19887-dwahlin&amp;pivots=platform-web) available that helps get you started which is what I&apos;ll focus on here. Ensure that you have the pre-reqs listed in the Calling QuickStart ready to go:

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=m365-19887-dwahlin).
- [Node.js](https://nodejs.org/) Active LTS version.
- An active Communication Services resource (more on this below). [Create a Communication Services resource](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/create-communication-resource?WT.mc_id=m365-19887-dwahlin).
- A User Access Token to instantiate the call client (more on this below). Learn how to [create and manage user access tokens](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/access-tokens).

## Step 1: Clone the Calling QuickStart Repo

The [Calling QuickStart](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?WT.mc_id=m365-19887-dwahlin&amp;pivots=platform-web) has you create all of the code from scratch, including getting an access token. If you want to save time, I have a Github project that has all of the tasks already completed. You can [clone this repo](https://github.com/DanWahlin/acs-voice-calling-quickstart?WT.mc_id=m365-19887-dwahlin) (or download and extract the .zip) to get the project on your machine. The steps that follow assume that you&apos;re working with the code from the cloned repo, which follows the approach shown in the Calling QuickStart.

## Step 2: Create an ACS Resource in Azure

The first thing that the pre-reqs require is to create an ACS resource in the [Azure Portal](https://portal.azure.com?WT.mc_id=m365-19887-dwahlin) (note that the [Azure CLI](https://blog.codewithdan.com/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/#azure-cli) can also be used). To create the resource, log in to the Azure Portal, click on **Create a resource**, and then search for &quot;communication&quot;. You should see **Communication Services** show up. Select it and then select the **Create** button.

Creating an ACS resource in the portal is straightforward and quick. You do the following:

- Select an Azure subscription
- Select a resource group
- Enter a resource name
- Select your data location

[![](/images/blog/azure-communications-voice-calling-quickstart/image-2-1024x678.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/03/image-2.png)

Once you&apos;ve done that, select the **Review + Create** button followed by **Create**. Simple, right?

After creating the ACS resource you&apos;ll notice a **View and generate access keys** section on the **Overview** page.

[![](/images/blog/azure-communications-voice-calling-quickstart/image-3.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/03/image-3.png)

Select that and you&apos;ll be taken to a page where you can view your connection strings. Locate the **first connection string shown** and copy it to your clipboard. You&apos;ll need it later.

[![](/images/blog/azure-communications-voice-calling-quickstart/image-4-915x1024.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/03/image-4.png)

## Step 3: Install Packages

The next part of the [Calling QuickStart](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?WT.mc_id=m365-19887-dwahlin&amp;pivots=platform-web) has you create the project folder, add npm packages, and more. If you [cloned the repo](https://github.com/DanWahlin/acs-voice-calling-quickstart?WT.mc_id=m365-19887-dwahlin) mentioned earlier, you can open a command prompt at the root of the project and run **npm install**. Note that the **README.md** file in the project has this and other steps, but I&apos;ll include them in this post as well.

Open **package.json** and notice that it includes several ACS packages in the dependencies:

```json
&quot;dependencies&quot;: {
  &quot;@azure/communication-calling&quot;: &quot;...&quot;,
  &quot;@azure/communication-common&quot;: &quot;...&quot;,
  &quot;@azure/communication-identity&quot;: &quot;...&quot;
},
```

## Step 4: Add the ACS Connection String to an .env File

Create a new file named **.env** in the root of your project. Update it with the following information:

```properties
CONNECTION_STRING=&lt;your_acs_connection_string_goes_here&gt;
```

Ensure that you replace **&lt;your\_acs\_connection\_string\_goes\_here&gt;** with the actual ACS connection string you copied earlier in the Azure Portal.
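
If you&apos;d like to sanity-check the value before running the app, ACS connection strings follow an `endpoint=...;accesskey=...` format. Here&apos;s a small illustrative helper (my own sketch, not part of the QuickStart code) that verifies both parts are present:

```javascript
// Illustrative helper (not in the QuickStart repo): verify that a pasted
// ACS connection string contains both an endpoint and an access key part.
function isValidAcsConnectionString(value) {
  const parts = String(value).split(';');
  const hasEndpoint = parts.some((p) => p.startsWith('endpoint='));
  const hasKey = parts.some((p) => p.startsWith('accesskey='));
  // Valid only when both parts are present
  return hasEndpoint ? hasKey : false;
}
```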

## Step 5: Explore the index.html and client.js Files

The [Calling QuickStart](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?WT.mc_id=m365-19887-dwahlin&amp;pivots=platform-web) has you create an **index.html** file and add some basic HTML code into it. You can open the **index.html** file in the GitHub project to look at it. Notice that there&apos;s an input that is used to collect the &quot;callee&quot; information (the person you want to voice call) as well as two buttons to start and hang up a call.

```markup
&lt;input 
  id=&quot;callee-id-input&quot;
  type=&quot;text&quot;
  placeholder=&quot;Who would you like to call? (use 8:echo123)&quot;
  style=&quot;margin-bottom:1em; width: 300px;&quot;
/&gt;
&lt;div&gt;
  &lt;button id=&quot;call-button&quot; type=&quot;button&quot; disabled=&quot;true&quot;&gt;
    Start Call
  &lt;/button&gt;
  &amp;nbsp;
  &lt;button id=&quot;hang-up-button&quot; type=&quot;button&quot; disabled=&quot;true&quot;&gt;
    Hang Up
  &lt;/button&gt;
&lt;/div&gt;
```

The next file to look at is **client.js**. Open it and notice that it imports a few ACS symbols at the top. This file does the following in the **init()** function:

1. Creates an ACS **CallClient** object.
2. Creates an **AzureCommunicationTokenCredential** that will be used to securely communicate with ACS using a token.
3. Creates a **callAgent** object that handles making the call. Notice that the token is passed to the code that creates callAgent.

```javascript
async function init() {
    const callClient = new CallClient();
    const tokenCredential = new AzureCommunicationTokenCredential(&quot;&lt;your_access_token&gt;&quot;);
    callAgent = await callClient.createCallAgent(tokenCredential);
    callButton.disabled = false;
}
```

Further down in the code, you&apos;ll notice two event listeners that attach to the buttons shown in **index.html**. They handle starting a call and hanging up:

```javascript
callButton.addEventListener(&quot;click&quot;, () =&gt; {
    // start a call
    const userToCall = calleeInput.value;
    call = callAgent.startCall(
        [{ communicationUserId: userToCall }],
        {}
    );
    // toggle button states
    hangUpButton.disabled = false;
    callButton.disabled = true;
});

hangUpButton.addEventListener(&quot;click&quot;, () =&gt; {
    // end the current call
    call.hangUp({ forEveryone: true });
  
    // toggle button states
    hangUpButton.disabled = true;
    callButton.disabled = false;
});
```

## Step 6: Get an Access Token

The Calling QuickStart [provides a link to a document](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-javascript&amp;WT.mc_id=m365-19887-dwahlin) that has steps for adding code to create an access token. To save time, the GitHub project includes an **issue-access-token.js** script that will handle this task for you. It uses the ACS connection string from the **.env** file you created earlier to retrieve an access token from ACS. Note that the token it generates is a one-time-use access token, so if you run the application and refresh the page later, you&apos;ll have to get a new token.

To get a new access token, open a command prompt at the root of the project and run the following command:

```bash
node issue-access-token.js
```

After running the command, a token will be written to the console. Copy the token to your clipboard, open **client.js**, and replace **&lt;your\_access\_token&gt;** with the value of the token.

**NOTE:** Normally your app will call into a custom backend API to retrieve the access token. In an effort to keep things as simple as possible, the Calling QuickStart has you manually generate the token and then copy/paste it into the client.js code. If you&apos;d like to see an example that has the backend API, visit the [Calling Hero Demo](https://github.com/Azure-Samples/communication-services-web-calling-hero/?WT.mc_id=m365-19559-dwahlin) on GitHub. You&apos;ll find the code in the **Calling/Controllers/UserTokenController.cs** file.
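
To give an idea of what that backend call might look like from the client, here&apos;s a hypothetical sketch. The `/token` route and the shape of the JSON response are assumptions for illustration; they aren&apos;t part of the QuickStart:

```javascript
// Hypothetical sketch: fetch an ACS access token from your own backend API
// instead of pasting it into client.js. The '/token' route and the JSON
// response shape ({ token: '...' }) are assumptions, not the QuickStart's API.
async function getAccessToken() {
  const response = await fetch('/token');
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.token;
}
```

The token returned by a call like this could then be passed to `AzureCommunicationTokenCredential` in the `init()` function instead of a hard-coded value.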

## Step 7: Make a Call

Once you have a token, you can start the app and make a voice call. Run the following command from the root of the project:

```bash
npx webpack-dev-server
```

Once the webpack server starts and the bundles are built you can visit http://localhost:8080 in the browser.

[![](/images/blog/azure-communications-voice-calling-quickstart/image-7.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/03/image-7.png)

Enter **8:echo123** in the textbox and start the call. A bot will answer and tell you to record a message. After the beep, say whatever you&apos;d like, and once it&apos;s done recording, your message will be played back to you. Hang up the call whenever you&apos;re done.

Go back to your command prompt and press **CTRL+c** to stop the webpack dev server.

## Conclusion

You can see that initiating a voice call using Azure Communication Services is a fairly straightforward process. Although you only talked to a bot here, with a little more work you could call an actual user if you had their &quot;communication ID&quot;. Check out the [ACS GitHub repo](https://github.com/DanWahlin/acs-voice-calling-quickstart?WT.mc_id=m365-19887-dwahlin) for additional calling demos and the [ACS docs](https://docs.microsoft.com/en-us/azure/communication-services/overview?WT.mc_id=m365-19887-dwahlin) for more information.

You can watch a video walkthrough I created of the [Calling QuickStart](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?WT.mc_id=m365-19887-dwahlin&amp;pivots=platform-web) below:

https://www.youtube.com/embed/jOxqHIJ5-2E</content:encoded></item><item><title>Add Real-Time Video, Voice, and Chat to Your App with Azure Communication Services</title><link>https://blog.codewithdan.com/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/</link><guid isPermaLink="true">https://blog.codewithdan.com/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/</guid><pubDate>Tue, 02 Mar 2021 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-20-1024x609.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-20.png)

How many times have you tried to contact a company&apos;s customer service department only to waste time looking up the phone number or trying to find the *right* phone number to ask a simple question? Once you finally get through to someone you typically end up switching between the phone app and the company&apos;s website or app to pass along required information to the customer service representative. It can be frustrating.

Wouldn&apos;t it be easier to open the company&apos;s website or app and make the call directly from the screen that has all of your information already available? For example, a customer stuck at an airport could open an airline app, initiate a voice or video call (without having to look up the customer support phone number - yeah!), and explore different flight options visually in the app as they talk with the airline representative. In cases where chat (or SMS) work better, a chat could be started directly in the app to get help. It&apos;s a feature that I&apos;ve always wanted in airline apps, insurance apps, HR apps, and many others to simplify the process of getting help. By adding real-time communication directly into an app, a customer can easily access information directly from the app, talk through the problem with customer service, and get a more personalized experience along the way. That&apos;s only one of many scenarios where this technology can be used.

Using voice, video, chat, and SMS isn&apos;t limited to customer service scenarios of course. There are many additional scenarios where employees need to talk or chat with each other as they work within an app or website. For example, a doctor may consult with a patient using an app, an Internet technician working out in the field may need to communicate with headquarters to fix a problem with a customer&apos;s Internet connection, a mechanical engineer may need to be shown new constraints at a job site, or an architect might discuss structural modifications to a building. By integrating real-time communication directly into apps, workers can have the app open on their device, access the information they need to solve the problem, and get the audio/visual or chat help they need as well.

This is where Azure Communication Services (ACS) comes into the picture.

## Getting Started with Azure Communication Services

[Azure Communication Services (ACS)](https://docs.microsoft.com/en-us/azure/communication-services/overview/?WT.mc_id=m365-19559-dwahlin) allows you to add &quot;real-time multimedia voice, video, and telephony-over-IP communications features&quot; into your applications. In addition to multimedia, it can also be used to add chat and SMS functionality. Real-time multimedia capabilities can be used in website-to-website scenarios, app-to-app scenarios, or a combination of the two. Instead of worrying about setting up and maintaining the network and servers required to support your real-time multimedia needs, you can let ACS handle that for you and stay focused on building app features. ACS is built on the same enterprise services that support [Microsoft Teams](https://docs.microsoft.com/en-us/microsoftteams/teams-overview/?WT.mc_id=m365-19559-dwahlin) (which has over 115 million active users now). In fact, ACS is even interoperable with Microsoft Teams.

ACS has several different client libraries and REST APIs that can be used to integrate voice, video, chat, and SMS into an application. Currently, the following [languages/platforms](https://docs.microsoft.com/en-us/azure/communication-services/concepts/sdk-options?WT.mc_id=m365-19559-dwahlin#languages-and-publishing-locations) are supported:

- JavaScript
- .NET
- Python
- Java SE
- iOS
- Android

Let&apos;s look at the process of getting ACS set up and walk through an example application that&apos;s available.

## Step 1: Create an Azure Communications Service Resource in the Azure Portal

To get started, visit [https://portal.azure.com](https://portal.azure.com?WT.mc_id=m365-19559-dwahlin), log in to the portal, and select **Create a resource**. In the **Search the Marketplace** textbox, enter **Communication Services** and select that service once it is displayed.

**NOTE:** If you don&apos;t have an Azure account you can [get a free one here](https://azure.microsoft.com/en-us/free/?WT.mc_id=m365-19559-dwahlin).

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-19-719x1024.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-19.png)

You&apos;ll be taken to the ACS resource page where you can select the **Create** button to get started.

To create an ACS resource you&apos;ll select your subscription and resource group names, enter the ACS resource name you&apos;d like to use, and select your data location:

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-18-1024x675.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-18.png)

Select the **Review + Create** button, review your details, and then select the **Create** button to get started.

After the ACS resource is created you&apos;ll see the standard overview page with information about your new ACS resource. Select the **Keys** option in the navigation menu to the left to view the keys and connection strings that can be used to authorize API calls from your client to ACS.

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-21.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-21.png)

Note the Primary key connection string that is listed. You&apos;ll use it in the next step.

**NOTE:** If you prefer to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?WT.mc_id=m365-19559-dwahlin), you can create and show an ACS resource using the following commands:

```bash
# If you don&apos;t have the az communication extension installed run the following:

az extension add --name communication

# Create the Azure Communication Services resource

az communication create --name &quot;&lt;communicationName&gt;&quot; --location &quot;Global&quot; --data-location &quot;United States&quot; --resource-group &quot;&lt;resourceGroup&gt;&quot;

az communication show --name &quot;&lt;communicationName&gt;&quot; --resource-group &quot;&lt;resourceGroup&gt;&quot;
```

## Step 2: Clone the Communication Services Web Calling Hero Example

Now that you have an ACS resource set up in Azure, you can clone an example called &quot;Web Calling Hero&quot; from GitHub. This example uses a .NET Core backend with a React frontend. If you don&apos;t have .NET Core 3.1 or higher installed, you can visit [https://dotnet.microsoft.com/download](https://dotnet.microsoft.com/download/?WT.mc_id=m365-19559-dwahlin) to get it installed quickly and easily on Windows, Mac, or Linux. It&apos;s important to note that while this sample uses .NET Core for the backend, there are several other options available as mentioned earlier.

Open a **terminal window,** navigate to the directory where you&apos;d like to clone the example on your machine, and run the following command:

```bash
git clone https://github.com/Azure-Samples/communication-services-web-calling-hero.git
```

If you use [VS Code](http://code.visualstudio.com/?WT.mc_id=m365-19559-dwahlin), **cd** into the new directory that was created and type **code .** to open the editor. There&apos;s no requirement to use VS Code, of course, so if you prefer another editor, open the new directory in it.

Open the **Calling/appsettings.json** file, locate the **ResourceConnectionString** property, and replace the value with the connection string you saw earlier in the ACS **Keys** page in the Azure Portal. Save the file once you&apos;re done.
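
The relevant fragment of **appsettings.json** looks something like this (placeholder values shown; other settings omitted):

```json
{
  "ResourceConnectionString": "endpoint=https://your-acs-resource.communication.azure.com/;accesskey=your-access-key"
}
```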

Open a terminal window in the project&apos;s **Calling** folder and run the following command to restore dependencies and build the project:

```bash
dotnet build
```

**NOTE:** The first time I ran the build I received an error about a missing build assembly, which was puzzling. I ran **dotnet clean** but still got the error. I finally ran the build again, got the error, manually deleted the **bin** and **obj** folders, and then things worked fine.

Once the project builds successfully start the server by running the following command:

```bash
dotnet run
```

This will start up the app server and expose port 5001. Visit **https://localhost:5001** in your browser to see the homepage of the application. You should see a page similar to the following.

**NOTE:** If you haven&apos;t created a developer certificate on your machine you may receive a certificate error when first viewing the page in the browser. If that&apos;s the case, stop the server in the terminal window and run **dotnet dev-certs https -t** to add and trust a developer certificate on your machine. Once the cert is created run **dotnet run** again to start the server.

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-16-1024x605.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-16.png)

Click the **Start a call** button to initiate a call and you&apos;ll be taken to another screen that prompts you to allow access to your microphone and camera:

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-14-1024x607.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-14.png)

After allowing access, select the appropriate devices you want to use in the drop-down boxes to the right. After selecting them, toggle your camera and microphone to test things out:

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-13-1024x609.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-13.png)

Once you&apos;re ready, click the **Start call** button.

It takes at least two people to have a call, of course, but since you&apos;re on localhost you can only invite yourself. That works for testing purposes, though. If you click the invite people icon (see below), select **Copy join info**, paste the link in another tab, and then choose a different camera/mic (if you have one available), you can see how it works.

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-12-1024x601.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-12.png)

Here&apos;s an example of what a call looks like. This call was between myself and some imposter version of myself:

[![](/images/blog/add-real-time-video-voice-and-chat-to-your-app-with-azure-communication-services/image-11-1024x609.webp)](https://blog.codewithdan.com/wp-content/uploads/2021/02/image-11.png)

If you deploy the app to Azure or use a tool like [ngrok](https://ngrok.com) to expose it publicly (for testing purposes) you can invite others to join and have a call directly in the web application. This same type of functionality can be added to your mobile and desktop apps as well!

**NOTE:** If you use [ngrok](https://ngrok.com) to expose **localhost** publicly (for testing purposes) you&apos;ll need to run the following command for it to work properly: **ngrok http https://localhost:5001 -host-header=&quot;localhost:5001&quot;**

So what&apos;s the magic that makes all of this work? The short answer is [Microsoft Teams services](https://docs.microsoft.com/en-us/microsoftteams/teams-overview?WT.mc_id=m365-19559-dwahlin), but that functionality is wrapped up in ACS packages that you can use directly in your apps. Let&apos;s explore a few portions of the code in this sample app.

## Step 3: Exploring the Code

### Backend Code

Let&apos;s start by looking at the backend code. First off, two ACS assemblies are included in the project&apos;s **Calling.csproj** file:

```xml
  &lt;ItemGroup&gt;
    &lt;PackageReference Include=&quot;Azure.Communication.Administration&quot; Version=&quot;...&quot; /&gt;
    &lt;PackageReference Include=&quot;Azure.Communication.Common&quot; Version=&quot;...&quot; /&gt;
    ...
  &lt;/ItemGroup&gt;
```

Open **Calling/Controllers/UserTokenController.cs** and notice the **Azure.Communication.\*** namespaces imported at the top of the file. Looking through the code you&apos;ll notice the following:

- The constructor loads the ACS connection string from the **appsettings.json** file that you explored earlier.

```csharp
public UserTokenController(IConfiguration configuration)
{
    _client = new CommunicationIdentityClient(configuration[&quot;ResourceConnectionString&quot;]);
}
```

- The **GetAsync()** REST API function creates a **CommunicationUser** object, issues a token, and returns a token response. The token is used by the frontend client to connect with ACS services (more on that in a moment).

```csharp
public async Task&lt;IActionResult&gt; GetAsync()
{
    try
    {
        Response&lt;CommunicationUser&gt; userResponse = await _client.CreateUserAsync();
        CommunicationUser user = userResponse.Value;
        Response&lt;CommunicationUserToken&gt; tokenResponse =
            await _client.IssueTokenAsync(user, scopes: new[] { CommunicationTokenScope.VoIP });
        string token = tokenResponse.Value.Token;
        DateTimeOffset expiresOn = tokenResponse.Value.ExpiresOn;
        return this.Ok(tokenResponse);
    } 
    catch (RequestFailedException ex)
    {
        Console.WriteLine($&quot;Error occurred while Generating Token: {ex}&quot;);
        return this.Ok(this.Json(ex));
    }
}
```

That&apos;s it for the backend code!

### Frontend Code

The bulk of the application functionality is handled by the frontend code, which is written in TypeScript and React and located in the **Calling/ClientApp/src** directory. The app relies on the [FluentUI](https://developer.microsoft.com/en-us/fluentui#/?WT.mc_id=m365-19559-dwahlin) library for some of the base controls, styles, and icons that are used.

Open **package.json** and you&apos;ll see the following ACS packages included:

- @azure/communication-calling
- @azure/communication-common

These packages provide the audio, video, and device selection functionality needed by the application. Now let&apos;s explore a few key parts of the frontend codebase.

Remember the token issuer backend API that you looked at earlier? When the frontend app is ready to start a call, the code retrieves the token from the backend API. This is handled by the **Utils/Utils.ts** file:

```typescript
getTokenForUser: async (): Promise&lt;any&gt; =&gt; {
  const response = await fetch(&apos;/userToken&apos;);
  if (response.ok) {
    return response.json();
  }
  throw new Error(&apos;Invalid token response&apos;);
}
```

The token is used in **core/sideEffects.ts** to create an **AzureCommunicationUserCredential** which is then used to create a call agent:

```typescript
  const tokenCredential = new AzureCommunicationUserCredential(userToken);
  let callAgent: CallAgent = await callClient.createCallAgent(tokenCredential);
```

The project&apos;s **components** directory has the main components used in the application, while the **containers** and **core** directories have supporting files used for state management. Although there are quite a few components running on the frontend, one of the key components is named **GroupCall**. It&apos;s located in the **components/GroupCall.tsx** file.

The **GroupCall** component imports several ACS types and delegates functionality to a child component named **MediaGallery**. **MediaGallery** handles displaying the local and remote video streams for a call.

```jsx
  &lt;Stack.Item grow styles={!activeScreenShare ? activeContainerClassName : hiddenContainerClassName}&gt;
    &lt;MediaGallery /&gt;
  &lt;/Stack.Item&gt;
```

Here&apos;s a snippet from the **MediaGallery** component&apos;s tsx. Notice that it includes **RemoteStreamMedia** and **LocalStreamMedia** child components.

```typescript
const getMediaGalleryTilesForParticipants = (participants: RemoteParticipant[], userId: string, displayName: string) =&gt; {
  // create a RemoteStreamMedia component for every remote participant
  const remoteParticipantsMediaGalleryItems = participants.map((participant) =&gt; (
    &lt;div className={mediaGalleryStyle}&gt;
      &lt;RemoteStreamMedia
        key={utils.getId(participant.identifier)}
        stream={participant.videoStreams[0]}
        label={participant.displayName ?? utils.getId(participant.identifier)}
      /&gt;
    &lt;/div&gt;
  ));

  // create a LocalStreamMedia component for the local participant
  const localParticipantMediaGalleryItem = (
    &lt;div key={userId} className={mediaGalleryStyle}&gt;
      &lt;LocalStreamMedia label={displayName} stream={props.localVideoStream} /&gt;
    &lt;/div&gt;
  );
```

The **LocalStreamMedia** and **RemoteStreamMedia** components handle rendering the appropriate video streams. Here&apos;s a snippet from the **RemoteStreamMedia** component file. It uses an ACS **Renderer** object to handle rendering the appropriate view for the video stream.

```typescript
  const renderStream = async () =&gt; {
    var container = document.getElementById(streamId);

    if (container &amp;&amp; props.stream &amp;&amp; props.stream.isAvailable) {
      setAvailable(true);

      var renderer: Renderer = new Renderer(props.stream);
      rendererView = await renderer.createView({ scalingMode: &apos;Crop&apos; });

      // we need to check if the stream is available still and if the id is what we expect
      if (container &amp;&amp; container.childElementCount === 0) {
        container.appendChild(rendererView.target);
      }
    } else {
      setAvailable(false);

      if (rendererView) {
        rendererView.dispose();
      }
    }
  };
```

Although chat isn&apos;t included in this app, you can clone the repo at [https://github.com/Azure-Samples/communication-services-web-chat-hero](https://github.com/Azure-Samples/communication-services-web-chat-hero), update the connection string for your ACS resource in Azure in **appsettings.json**, and run the app to try it out. Check out the **Chat/ClientApp/src/components/ChatScreen.tsx** component to learn more about how it works or see the link to the chat docs in the Additional Resources section below.

While this walkthrough only scratches the surface, it should give you an idea of what can be done with Azure Communication Services and help you get started using it. A simpler &quot;vanilla JavaScript&quot; walkthrough is available here if you&apos;re interested:

[Quickstart: Add voice calling to your app](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?pivots=platform-web&amp;WT.mc_id=m365-19559-dwahlin)

## Conclusion

In today&apos;s connected world, users expect to be able to press a button to start a call, initiate a chat, or send an SMS message. While that type of functionality has been baked into mobile devices for years, the ability to add audio, video, chat, and SMS into the applications your customers and employees use every day can enhance their productivity and efficiency in many scenarios.

If you&apos;re interested in learning more about Azure Communication Services visit the docs listed in the Additional Resources below. You can also watch a video I put together that discusses Azure Communication Services and shows the calling &quot;hero&quot; demo in action.

https://www.youtube.com/watch?v=SM2Rgyi_0XU

### Additional Resources:

- [Azure Communication Service Docs](https://docs.microsoft.com/en-us/azure/communication-services/overview/?WT.mc_id=m365-19559-dwahlin)
- [Communication Services Web Calling Hero Demo](https://github.com/Azure-Samples/communication-services-web-calling-hero/?WT.mc_id=m365-19559-dwahlin)
- [Communication Services Chat Hero Demo](https://github.com/Azure-Samples/communication-services-web-chat-hero/?WT.mc_id=m365-19559-dwahlin)
- [Using Voice and Video](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?pivots=platform-web&amp;WT.mc_id=m365-19559-dwahlin)
- [Using Chat](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/chat/get-started?pivots=programming-language-javascript&amp;WT.mc_id=m365-19559-dwahlin)
- [Using SMS](https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/telephony-sms/send?pivots=programming-language-javascript&amp;WT.mc_id=m365-19559-dwahlin)
- [Azure Communication Services Pricing](https://azure.microsoft.com/en-us/pricing/details/communication-services/?WT.mc_id=m365-19559-dwahlin)</content:encoded></item><item><title>It&apos;s Time for a Change</title><link>https://blog.codewithdan.com/its-time-for-a-change/</link><guid isPermaLink="true">https://blog.codewithdan.com/its-time-for-a-change/</guid><pubDate>Mon, 29 Jun 2020 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/its-time-for-a-change/timeForAChange-1024x683.webp)

You&apos;ve probably heard the old adage, &quot;Change is good&quot; at some point in your life. Although change can be hard, I&apos;ve found that for me personally, it&apos;s the only way to grow and move forward in life. It was a big (and hard) change starting my own company 20 years ago but I wouldn&apos;t trade the experience I&apos;ve gained for anything.

To set the stage for this post, let me share a little about what I&apos;ve been doing, walk you through what I&apos;ve been thinking, and then discuss the next big change I&apos;ve decided to make (jump to the bottom if you don&apos;t care about the overall thought process :-)). Over the past 20 years I&apos;ve been running my own consulting and training company called Wahlin Consulting and have really enjoyed it. I&apos;ve consulted with large and small companies around the world about various technologies, designed enterprise-level architectures, helped developers improve their code, helped move companies to the Cloud (specifically Azure), developed over 100 instructor-led and video training courses, and taught millions of developers (in person and through video courses) about various technologies as well.

While running a company is something I can continue doing as long as I&apos;d like, over the past few years I&apos;ve found myself asking the same question more frequently: &quot;What comes next?&quot; Staying at home during COVID provided me with plenty of time to think and evaluate whether I wanted to keep doing the same thing or change things up entirely and tackle new challenges. I started working through the pros and cons and thinking about what I really wanted to do in the next chapter of my life.

## Pros and Cons - Is it all Roses?

![](/images/blog/its-time-for-a-change/roses-1024x678.webp)

The pros of running your own company far outweigh the cons, I&apos;ve found, but it&apos;s not all roses. There&apos;s one aspect of my job that I&apos;ve liked less and less over the years - business travel. There have been many years throughout my career where I&apos;ve been gone 2 - 3 weeks per month. Many times I&apos;d head to the airport Sunday morning, fly to some location around the world, and then get back late Friday night. The next Sunday would roll around and I&apos;d do it all over again. Although my family would come along when I went to &quot;fun&quot; places in the summer, most weeks I was away from them and on my own.

Don&apos;t get me wrong, travel is fun and the financial aspect was great, but it can definitely wear you down over time especially when you want to be there for your family. Sure, you get top level &quot;status&quot; and points with airlines and hotels which sounds nice at first, but ultimately that means you&apos;re away from home a lot. I&apos;ve found myself dreading going to the airport, staying in yet another hotel, and eating yet another dinner alone in some new city more and more as I got older. Being stuck at home during COVID really made me realize that I&apos;d like to get to a point where I don&apos;t have to travel so much.

Another downside of running a company is the workload: the number of hours you typically work in a given week, the pressure of keeping the business going during good times and bad, negotiating and finalizing contracts, always being on the lookout for the next contract, dealing with corporate taxes (thankfully my wife handles the finances), handling customer issues that come up, and more. You learn to work &quot;smarter&quot; versus &quot;harder&quot; as you gain more experience, but the long hours and stress never really end (note that it&apos;s not lost on me that many of you working for a company feel the same way at times). As mentioned, the pros of running your own company far outweigh the cons in my experience, and having flexibility to work where and when you want is great. But there&apos;s an associated cost. Nothing comes for free.

Having said all of that, do I recommend venturing out on your own? Absolutely! There are a lot of life, career, and entrepreneurial lessons to be learned, and there&apos;s a really big financial upside if you do it right. I&apos;ve talked about many of the pros and cons of running your own business or being an &quot;entrepreneurial coder&quot; on different podcasts if you&apos;re interested ([here](https://www.ecpodcast.io/episodes/27-dan-wahlin-how-to-establish-yourself-as-an-entrepreneurial-coder) and [here](https://cloudskills.fm/068)). Although running a company certainly isn&apos;t for everyone, I do feel strongly that everyone should start a &quot;side hustle&quot; if possible to bring in extra income and add additional stability and flexibility. The trick is figuring out the proper work/life balance.

That circles this little story back to &quot;Change is good&quot;. In the back of my mind I knew I was looking to change things up, travel less, and try something new. COVID (sadly) gave me plenty of time to stay home and think about it. I talked it over with my wife and we both agreed it was time for me to change things up career-wise. I let a few close friends know my thoughts and started thinking through what my next steps would be. Should I continue doing what I knew worked really well from a financial, flexibility, and comfort standpoint or should I venture into the unknown, mix things up, and tackle some new challenges? Sometimes timing is everything as you&apos;ll see next.

## The Big Change

I received an email in April asking if I&apos;d be interested in considering a brand new position that was opening up on the Microsoft Cloud Advocates team. The role would bridge [Microsoft 365](https://www.microsoft.com/en-us/microsoft-365/enterprise) and [Azure](https://azure.microsoft.com/) and focus on a new real-time collaboration framework being released in the near future called the [Fluid Framework](https://channel9.msdn.com/events/Build/2020/INT113?term=fluid%20framework%20nick&amp;lang-en=true).

I know a lot of great people on the different Cloud Advocates teams and had considered moving into that type of role several years ago when the group was first started. As a result, I decided to go through the process and see what happened. After finishing the &quot;interview loop&quot;, talking more with people inside and outside of Microsoft (including my friends John Papa, Craig Shoemaker, Shayne Boyer, and Jason Helmick who all work at Microsoft - thanks for your feedback guys!), getting buy-in from my wife, and spending **a lot** of time thinking through it on my own, I decided to make the jump and take the Cloud Advocate role at Microsoft.

In this new role I&apos;ll have the opportunity to do a lot of the same things I&apos;ve been doing over the years which played a big role in my decision. I&apos;ll still get to interact with and help developers, architects, and technical management, listen to feedback, work with various communities, build apps, create training content, speak at conferences and other events, record videos, and more.

I&apos;ll admit that this type of change is a little scary and intimidating especially since I&apos;m giving up something stable that my wife and I built up over 20 years, but I do believe that &quot;Change is good&quot; and I&apos;m ready for a change. I&apos;m looking forward to working with the Fluid, Microsoft 365, and Azure teams and to the challenges and new experiences that lie ahead!

![](/images/blog/its-time-for-a-change/Microsoft-logo-1024x459.webp)</content:encoded></item><item><title>Video: Building and Running Custom ASP.NET Core Containers</title><link>https://blog.codewithdan.com/video-building-and-running-custom-asp-net-core-containers/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-building-and-running-custom-asp-net-core-containers/</guid><pubDate>Sun, 26 Apr 2020 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/video-building-and-running-custom-asp-net-core-containers/2020-04-26_14-10-30-1024x576.webp)

I recently had the opportunity to do a webinar for Pluralsight where I talked about how you can use Docker to build and run custom ASP.NET Core containers. The containers can be run locally on your machine, on a company server within an on-prem data center, or even in the cloud. Here are the specific topics that I covered in this session:

- Docker Concept Review
- Docker and ASP.NET Core
- Creating a Custom Dockerfile
- Multi-Stage Dockerfiles
- Running ASP.NET Core Containers in Azure

You can view the recording from the webinar below. The slides from the presentation can be [found here](https://codewithdan.me/docker-aspnetcore).

## Building and Running Custom ASP.NET Core Containers

https://www.youtube.com/watch?v=Tfng7tKs2kc

If you&apos;d like to learn more about containers, Kubernetes, and much more [check out my courses on Pluralsight.com](https://pluralsight.pxf.io/KL0Ov).

[![](/images/blog/video-building-and-running-custom-asp-net-core-containers/my-ps-courses.webp)](https://pluralsight.pxf.io/KL0Ov)</content:encoded></item><item><title>5 Actions You Can Take To Reduce Anxiety/Stress and Increase Overall Wellbeing</title><link>https://blog.codewithdan.com/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/</link><guid isPermaLink="true">https://blog.codewithdan.com/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/</guid><pubDate>Mon, 09 Mar 2020 00:00:00 GMT</pubDate><content:encoded>&lt;figure&gt;

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/serenity-1024x463.webp)

&lt;figcaption&gt;

Zen Garden Meditation by [EliasSch](https://pixabay.com/users/eliassch-3372715)

&lt;/figcaption&gt;

&lt;/figure&gt;

Have you struggled much with anxiety or worry in your life? They can be easy to avoid when things are going well, but how do you handle them in more stressful situations? What can you do to help minimize anxiety, worry, and stress in your job, family, and other areas of life?

As someone who has struggled with anxiety and worry from time to time over the years I asked myself the same question and decided that I needed to take action. This post details a few strategies and actions that have really helped me in a big, big way. I hope you find them useful as well - I&apos;m living proof that they can work. So what was I anxious or worried about? Well...a lot of things.

Everybody is different of course, but for me personally I&apos;d be anxious about how I&apos;d play in a sporting event (not simply nervous, but anxious and stressed out), I was anxious about how I&apos;d do on a quiz or test at school, I was anxious and quite honestly fearful about taking certain classes because someone said they were &quot;really hard&quot; (they didn&apos;t end up being that hard it turns out), I was anxious about what people thought about me and if they &quot;liked&quot; me, I was anxious about world events that were happening (coronavirus anyone??), I was anxious about talks I had to prepare for and give at conferences, and the list goes on and on. It&apos;s mentally and physically draining to say the least.

Anxiety and worry were something I experienced even as a little kid apparently. While I don&apos;t remember much from my younger years, I do remember receiving the &quot;Head Worrywart&quot; award from my 2nd grade teacher. I was happy to receive any &quot;award&quot; at that age but obviously didn&apos;t understand what she was implying at the time. On a side note, who gives out that kind of &quot;award&quot; to a little kid? :-)

Anxiety was so bad at times before a given semester started in college that I&apos;d literally sleep for nearly 3 days before the first day of class. I was \*that\* worried, anxious, and stressed out over how I&apos;d do and if I could \*still\* get good grades. Ridiculous? Yeah - in hindsight the way I dealt with that scenario and others was absolutely ridiculous. But looking back I now realize &quot;I didn&apos;t know what I didn&apos;t know&quot;.

I&apos;m guessing that you can relate to some of these experiences and probably have many of your own that you can add to the list. If there&apos;s one thing I&apos;ve learned about being anxious and worried it&apos;s that you&apos;re not alone. Far from it! Many people struggle with severe anxiety that holds them back from reaching their true potential. Some people hide it better than others, some are fortunate to not have to deal with it as much, and still others have learned techniques to deal with it from their parents or on their own.

If older me could talk to younger me, I would&apos;ve told myself to &quot;chill out&quot; and relax. Being anxious and worried doesn&apos;t help you in most scenarios (although it can act as a motivator or a depressant for some). If you&apos;ve ever dealt with anxiety you know that it&apos;s not quite as simple as deciding to &quot;stop being anxious&quot;. The younger, less experienced version of me didn&apos;t have the necessary skills or tools to deal with the problem.

The upside of my anxiety and stress was that instead of caving into it, it actually helped motivate me to succeed in school, sports, career, life, etc. However, it also took a toll on me and in my early 40s I realized I needed to make some serious life changes to be less stressed out, less anxious, and enhance my overall wellbeing. I knew that it would eventually affect my health, mental state, relationships, and more if I didn&apos;t find a way to deal with it better.

So what did I do? While there are many actions you can take to decrease anxiety, here are 5 specific ones that I recommend.

## Actions You Can Take to Reduce Anxiety and Worry

As I hit my 40s I was motivated (due to a few life events) to change things up and started researching anything I could find on the subject of anxiety and worry. I read book after book to learn more.

The first book I remember reading was [How to Stop Worrying and Start Living](https://www.amazon.com/How-Stop-Worrying-Start-Living/dp/1607964007) by Dale Carnegie which I highly recommend. It&apos;s an &quot;oldie but goodie&quot;. In addition to that book and many others, I researched how the mind worked, the role of the [amygdala](https://en.wikipedia.org/wiki/Amygdala) and [prefrontal cortex](https://en.wikipedia.org/wiki/Prefrontal_cortex) (as well as other key areas of the brain), how thoughts work, meditation techniques, mindfulness, and other strategies and practices that can be used to reduce anxiety/stress/fear and increase overall wellbeing. Researching these topics became a hobby that I really enjoyed and still enjoy today.

Although I&apos;ve talked with a few close family members and friends about what I&apos;ve learned and what I do, I&apos;ve never publicly shared information about how it&apos;s impacted my life. The younger me would never open up about these struggles and what I&apos;ve done to deal with them. I mean come on, someone may not like what I have to say right?! Fortunately, while the older me respects others&apos; opinions and constructive criticism they may send my way, I don&apos;t let other people determine how I feel.

With all of the seemingly non-stop negative events going on in the world I put out this tweet recently to provide some simple guidance to people who may be struggling with stress and anxiety:

[![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/2020-03-10_09-41-09-812x1024.webp)](https://twitter.com/DanWahlin/status/1237098133877747712)

Here&apos;s more information about these 5 actions and why I chose them.

## 1\. Don&apos;t Watch TV

This might be the simplest change I made to reduce stress, anxiety, worry, and fear. While there are certainly some good shows, documentaries, and movies on TV, you have to go out of your way to find them. Several of the shows and news stations prey upon people&apos;s fears and sadly make money off of triggering anxiety and worry in viewers.

The number one benefit of turning off the TV is that it forces you to focus your attention elsewhere, and if you use the time wisely you won&apos;t miss TV at all. Another side effect is that you aren&apos;t bombarded with constant negativity, especially from the media.

Go look at the website for your favorite news network when a crisis is happening. Why are the majority of articles and videos about the crisis? Because people need to hear about it in 100 different ways? No (and I realize I&apos;m stating the obvious here) - because it keeps people glued to the TV or to the website, which makes the media company money. They couldn&apos;t care less if you&apos;re stressed out over the events. In fact, the more stressed you are the more you feel like you need to &quot;tune in&quot; and make sure you don&apos;t miss anything.

**Turn off the TV and notice how much better you feel!** I promise the world will still be there and things will be fine. I still watch shows on Netflix, Curiosity Stream, Disney+, various apps, etc. but I limit my time to a few hours a week. I keep up with the news by spending a few minutes a day visiting different news sites (keeping in mind that what they publish is geared toward making money). I never feel out of the loop about world events since I talk with friends, and I quite honestly don&apos;t miss TV at all. I spend the time I save on more worthwhile activities.

## 2\. Spend Time Each Day Meditating/Quieting Your Mind

This is the biggest change I made in my life by far. It was a lot harder than simply turning off the TV, but it&apos;s worth it. I mentioned earlier that the younger me didn&apos;t have the necessary tools and skills to deal with anxiety, stress, and fear. My mind was like a stormy ocean with huge waves coming and going depending on the situation. Sometimes I&apos;d ruminate as a mental wave hit and it&apos;d go from being a smaller 5 foot wave to over 100 feet in a matter of minutes.

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/storm-1024x597.webp)

If I had a mental surfboard maybe that&apos;d be fun, but I definitely didn&apos;t have one. I wanted calm seas and it was rare to have them with everything going on in life. For me personally, that&apos;s where meditation helps. With enough practice you can learn how to &quot;calm the seas of your mind&quot;.

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/calm_seas-1024x676.webp)

Sound a bit &quot;hippie&quot; or something only monks would do? I thought that at first but quickly realized I was wrong. I didn&apos;t know what I didn&apos;t know (yet again). Meditation turned out to be extremely helpful and worthwhile.

Here&apos;s a challenge for you: go into a quiet room, sit in a comfortable position, close your eyes, and start counting your in and out breaths. How long did you go before your mind jumped to other thoughts and you completely forgot about your original intent of counting your breaths? For me, it was hard to get past 5 breaths without forgetting what I was there to do.

I can honestly say that many of my first attempts at &quot;meditation&quot; were absolute failures. I felt like I sat there with my eyes closed, listened/felt my breathing for a few seconds, and then immediately jumped to thinking about things that were stressing me out. Then other thoughts like, &quot;You can&apos;t do this!&quot;, &quot;Why are you wasting your time?&quot;, &quot;This doesn&apos;t work!&quot;, came to mind and I actually felt bad about an experience that was supposed to make me feel good. But, I kept at it and slowly became better and better at focusing on my breath and calming my mind.

So how do you get started? I&apos;d recommend starting by sitting in a quiet place, closing your eyes, and listening/feeling your breath (inhalations and exhalations) for 60 seconds. If a thought pops into your mind (which it absolutely will), treat it as a cloud floating by and then switch back to focusing on your breath.

If you&apos;d like official guidance, information about [research studies](https://nccih.nih.gov/health/meditation/overview.htm), etc., there are many books available. Here are a few I&apos;ve enjoyed:

- [Meditation for Beginners](https://www.amazon.com/Meditation-Beginners-Jack-Kornfield-Ph-D/dp/1591799422) by Jack Kornfield
- [Meditation is Not What You Think](https://www.amazon.com/Meditation-Not-What-You-Think/dp/0316411744) by Jon Kabat-Zinn
- [The Untethered Soul](https://www.amazon.com/Untethered-Soul-Journey-Beyond-Yourself/dp/1572245379) by Michael A. Singer
- [The Little Book of Being](https://www.amazon.com/Little-Book-Being-Practices-Uncovering/dp/1683642171) by Diana Winston
- [Many more...](https://www.amazon.com/s?k=meditation&amp;ref=nb_sb_noss_1)

I think using an app, audio recording, or video is the best way to get started. Once you get more experience you can do various meditation practices on your own. Here are a few of the apps on my phone that I use:

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/2020-03-09_15-45-53.webp)

Set aside 60 seconds a day to start, then move up to 5 minutes, 30 minutes, and so on. You can do it on the bus or riding the subway or train on the way to work (please don&apos;t do it while driving :-)). It made a huge difference in my life for &quot;calming the seas&quot;. A nice side effect is that by practicing meditation and learning to calm my mind, I became more focused during work and in other areas of life as well.

## 3\. Learn to Monitor Your Thoughts

There&apos;s a term called &quot;mindfulness&quot; that you&apos;ll hear a lot if you dive more into meditation. While there are several definitions of it, here&apos;s one that I like:

&gt; Mindfulness
&gt; 
&gt; A mental state achieved by focusing one&apos;s awareness on the present moment, while calmly acknowledging and accepting one&apos;s feelings, thoughts, and bodily sensations, used as a therapeutic technique.
&gt; 
&gt; [https://www.lexico.com/en/definition/mindfulness](https://www.lexico.com/en/definition/mindfulness)

Mindfulness is all about being aware of what you&apos;re thinking about in the present moment, understanding how you&apos;re feeling, and accepting emotions that may be uncomfortable.

How has being more &quot;mindful&quot; helped me? My #1 place to get stressed out (on occasion) is the shower. You&apos;d think that&apos;d be relaxing, and it normally is, but because I have nothing else to do there my mind starts racing about the day&apos;s tasks, someone who responded rudely to an email, that person I helped who didn&apos;t even say &quot;thanks&quot;, something my wife said, and on and on. My old self would let those thoughts run rampant and before I knew it a little 1 foot mental wave was 100 feet tall and completely crushing me. I could literally make myself mad, stressed out, anxious, or (choose some negative emotion) in the space of 5 - 10 minutes.

That&apos;s not the case now. Why? Because I&apos;m able to be &quot;mindful&quot; of what I&apos;m thinking about, much more aware of how I&apos;m feeling, and better able to monitor my thoughts and emotions overall. A person who is very mindful oftentimes has high [emotional intelligence](https://www.psychologytoday.com/us/basics/emotional-intelligence) (I&apos;m still working on that one :-)). Go online to any social media site and you&apos;ll find that many people aren&apos;t mindful at all. They literally let their emotions in the present moment dictate what they type - and if you&apos;ve ever sent an email that you later regretted, you know how well that works out. By being more mindful you&apos;ll find many scenarios aren&apos;t stressful anymore.

For example, when I get allergies or if I&apos;m really tired I tend to be pretty cranky. That has led to family scenarios where I turned something little into something big. I won&apos;t say I&apos;m even close to perfect with that now, but I can say with confidence that I&apos;m much better at catching when I feel that way and then avoiding scenarios that I&apos;m not up to handling well at the time.

Someone&apos;s rude to you in a meeting, online, or on a call? If you&apos;re mindful you&apos;ll be able to notice your emotions, feel anger building up, feel your pulse increasing, feel a hotness in your body (or whatever feeling you get), and then quickly counter with a thought like, &quot;Relax and calm the mental waves before responding.&quot;

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/angry-1024x683.webp)

I came up with a term I call the &quot;golden pause&quot;, which is the ability to have someone talk rudely to me, catch how I&apos;m feeling about it, pause for a few seconds to calm the waves, and then respond (hopefully) with more control. I still have my moments, but I&apos;m much better than I used to be and I&apos;m confident in saying that you will be too if you consistently practice and work at it. It&apos;s important to note that &quot;mindfulness&quot; isn&apos;t necessarily the same thing as &quot;meditation&quot;. You can be mindful at any moment and at any time, whether it&apos;s a good or bad situation.

## 4\. Exercise

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/walking-e1583804774396-1024x872.webp)

I&apos;ll admit that exercising was a lot easier when I was playing sports in high school and college. As I&apos;ve gotten older and traveled a lot for business, I&apos;ve come up with plenty of excuses to avoid exercising. However, what I&apos;ve found is that exercise is critical for calming the mental waves and reducing anxiety. I definitely notice a change when I don&apos;t get enough exercise. The topic has been covered so much in the media, [research reports](https://adaa.org/living-with-anxiety/managing-anxiety/exercise-stress-and-anxiety), magazine articles, blog posts, etc. that I won&apos;t bore you with more details here - everyone knows they should exercise more.

The bottom line is that getting outside and exercising has many physical and mental benefits, and I&apos;ve found it can help calm anxiety and worry on its own. If you&apos;re not spending at least 30 minutes a day exercising, I&apos;d challenge you to do that. Since I like to listen to a lot of audiobooks on various topics, going out for a jog or walk is a great way to listen and learn while also benefiting my mind and body.

You can even [meditate while walking](https://ggia.berkeley.edu/practice/walking_meditation)!

## 5\. Show Gratitude

When was the last time you expressed gratitude for all of the things you have in life? Unfortunately, in today&apos;s world people tend to focus on everything that&apos;s wrong in the world or on what they **don&apos;t** have. The Joneses bought a fancy new car and I want one too! Tim&apos;s family went on a super cool vacation to Europe and we can&apos;t afford that right now. Michelle&apos;s kid is excelling in school and mine is struggling. The list goes on and on.

Someone will always be doing &quot;better&quot; than you and have something you want. That&apos;s a simple fact of life. It&apos;s unfortunate that social media tends to switch our focus to what we don&apos;t have and how we compare with others, making us forget all of the good things we have going in life.

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/gratitude-1024x768.webp)

One of the single biggest changes I&apos;ve made in my daily routine is to wake up and take a few moments to express gratitude for what I have in life. I have a phenomenal wife and kids, the best parents a person could ask for, a job I enjoy doing, no major health issues, great friends, and more &quot;stuff&quot; than I actually need. I make it a point to show gratitude and acknowledge the things I do have in the morning to start my day off. It&apos;s amazing how this one simple action can affect your whole day.

## 6\. (Bonus) Learn Breathing Exercises

I couldn&apos;t include this one in the tweet I mentioned earlier (due to size limitations) but it&apos;s something that has had a huge impact on my life. If you need a tool to help before, during, or after stressful situations then look no further than breathing exercises.

For example, I like to do something called [Box Breathing](https://www.medicalnewstoday.com/articles/321805#the-box-breathing-method) that I learned from a book called [Unbeatable Mind: Forge Resiliency and Mental Toughness to Succeed at an Elite Level](https://www.amazon.com/Unbeatable-Mind-Resiliency-Toughness-Succeed/dp/1508730512/ref=tmm_pap_swatch_0?_encoding=UTF8&amp;qid=1583858847&amp;sr=8-1) written by former Navy SEAL Mark Divine. There are many variations of this technique, but to get started you inhale for 5 seconds, hold for 5 seconds, exhale for 5 seconds, and then hold with your lungs empty for the last 5 seconds. Repeat that for as long as you&apos;d like and then extend the times as you get better at it. I&apos;ll do it before going on stage to give a talk at a large event or in other scenarios where I want to relax so that I can do my best.

I use an app called [Pranayama](https://saagara.com/apps/breathing/health-through-breath-pranayama) on my phone that helps me time these exercises and provides additional breathing exercises:

![](/images/blog/5-actions-you-can-take-to-reduce-anxiety-stress-and-increase-overall-wellbeing/breathing-473x1024.webp)

## Additional Actions

There are many more ideas that I could discuss to help with calming anxiety and stress. Here are a few others to consider:

1. **Look for the good in others - even those you don&apos;t like or who don&apos;t like you.** I&apos;m not much of a &quot;turn the other cheek&quot; kind of guy (anyone who knows me can attest to that), but rather than focusing on the negative in others, look for the positive and watch how that changes how you view them. This applies to groups you may not like, people with opposing political or professional views, etc.  
    
2. **Get your finances in order.** If you&apos;re spending more than you make get a budget in place and stick to it. In my early married years my wife and I had very little money so we used a budgeting technique we learned that organized all expenses into columns (food, clothes, entertainment, rent, car, insurance, etc.). It worked wonders for saving money even when we didn&apos;t have much to save at the time and gave us a feeling of financial freedom.  
    
3. **Spend more time reading** - pick anything! I wouldn&apos;t have learned about what can be done to help with stress and anxiety without reading. I read or listen to something on the subject nearly every day. Although I enjoy listening to music as well, when I&apos;m exercising (walking/jogging) or driving I turn on Audible books many times and leverage the time to learn.  
    
4. **Play a puzzle game like Sudoku, a crossword puzzle, or something similar each day.** I&apos;ve significantly increased my ability to focus by playing a few games of Sudoku every day. While I&apos;m focused on the game it&apos;s harder for anxiety/worry to affect me.  
    
5. **Minimize your interactions on social media.** Life will still go on without you sharing your pictures, opinions, and status updates. The added benefit is that you&apos;ll miss the negative aspects of social media as well. You see people doing something amazing and wish you could do that so you feel a little down, someone received a raise but your company isn&apos;t doing anything to bump your salary, etc. I stopped following people who use social media to vent every time they&apos;re frustrated about something (pro tip - don&apos;t do that), or who explain why everyone whose political beliefs differ from theirs is wrong, etc. While I&apos;m still very active on [Twitter](https://twitter.com/danwahlin), I&apos;ve cut back on how often I visit and tweet.  
    
6. **Stop looking for outside validation from others.** I&apos;m as guilty as the next person when it comes to this one. Everyone wants to be liked, adored by their social media fans, and have songs of praise sung about them. :-) If there&apos;s one thing I&apos;ve learned in life it&apos;s that outside validation will never be enough no matter how many friends or social media followers you have. Having thousands or millions of &quot;followers&quot; on social media doesn&apos;t provide proof that you matter. How many famous people have died and are never heard about again? There&apos;s a great quote about this from [Marcus Aurelius&apos;s Meditations](https://www.amazon.com/gp/product/0812968255) (translated and adapted by Gregory Hays). He says, &quot;It never ceases to amaze me: we all love ourselves more than other people, but care more about their opinion than our own. If a god appeared to us — or a wise human being, even — and prohibited us from concealing our thoughts or imagining anything without immediately shouting it out, we wouldn’t make it through a single day. That’s how much we value other people’s opinions — instead of our own.&quot; You have to build confidence from the inside out (not outside in), and practicing meditation and mindfulness can help you overcome your dependency on outside validation.  
    
7. **Help others.** If you&apos;ve ever dedicated time to helping others in need you know how good it makes you feel. It&apos;s a great way to reduce your anxiety given that oftentimes you realize how good you have it compared to what others are going through. It also helps develop more empathy - something the world seems to be lacking nowadays. Volunteer at a homeless shelter, food bank, animal shelter, school, or really anywhere that lets you help others.  
    
8. **What music do you listen to?** Sometimes when I&apos;m running I like some &quot;pump me up&quot; type music to get going. Or, I might have a task I need to get done that requires a little extra motivation to get started. In those cases I&apos;ll occasionally listen to alternative/nu metal music, some hard rock from my younger years, or something similar, but it&apos;s pretty rare now. I&apos;ve really cut back on music that I know causes stress or doesn&apos;t have a positive &quot;vibe&quot; to it. Music is such a personal decision that I don&apos;t have any specific recommendations here. I&apos;d simply encourage you to review what you listen to and ask the question, &quot;Is this helping me manage my stress and anxiety or making it worse?&quot;  
    
9. **Do a &quot;technology detox&quot;.** How many times a day do you check email, social media, websites, and other information sources? If you&apos;re like me you probably don&apos;t know the answer but can safely say, &quot;Way too much!&quot;. Try cutting back on that usage and stop using your phone so much.

## Conclusion

I wish I could finish up by saying that these actions have resulted in mastering all aspects of my life! That&apos;s not the case though, and I don&apos;t expect to ever achieve that level of mastery. I still get anxious and worry about things from time to time, get angry in certain situations, overreact in others, and still fear doing certain things. The reality is I struggle with anxiety and worry like everyone else on the planet. It&apos;s called being human.

My struggles are much more manageable than before I started implementing these actions and others. There are many situations that would&apos;ve sent me into a rage or completely ruined my day that I now let pass by. I&apos;m able to calm the mental waves much more easily before they turn into a raging storm. Part of that is age and experience, but a lot of it is what I PRACTICE and DO on a daily basis.

If you&apos;re struggling with anxiety/worry/fear right now I&apos;d challenge you to sit back and take stock of your life. What are you doing to help yourself? What are you doing that you know you should stop? Would implementing any of these actions (or others) in your life help out? Having said that, if you don&apos;t think you can manage what you&apos;re going through on your own then make it a point to seek out professional help. It takes a strong person to ask for help from others; it&apos;s absolutely not a sign of weakness.

Some people refuse to acknowledge that they need help or that they need to change. Don&apos;t be one of those (stubborn) people. It&apos;s easy to do nothing and pretend that everything is great when you know deep inside it&apos;s not. I finally realized that right around the time I turned 40. What if I had implemented some of these actions in my 20s or 30s though? How would my life and my family&apos;s life have changed?

My challenge to you is to give a few of these ideas a try and see how you can reduce your stress and anxiety while enhancing your overall wellbeing. I&apos;m living proof that it can be done.

Have other actions you take that help you manage stress, anxiety, and worry? Leave a comment to let me know about what you do.</content:encoded></item><item><title>New Pluralsight Course - Kubernetes for Developers: Deploying Your Code</title><link>https://blog.codewithdan.com/new-pluralsight-course-kubernetes-for-developers-deploying-your-code/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-kubernetes-for-developers-deploying-your-code/</guid><pubDate>Sat, 29 Feb 2020 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/new-pluralsight-course-kubernetes-for-developers-deploying-your-code/2020-02-29_18-54-59-1024x272.webp)](https://pluralsight.pxf.io/bo0jv)

I’m excited to announce the release of my next course on Pluralsight titled [Kubernetes for Developers: Deploying Your Code](https://pluralsight.pxf.io/bo0jv)! This course is the next in the Kubernetes for Developers learning path and focuses on different techniques that can be used to deploy your containerized applications to Kubernetes. It follows the [Kubernetes for Developers: Core Concepts](https://pluralsight.pxf.io/R9W2N) course.

Here&apos;s the recommended order for taking my courses that cover containers and Kubernetes:

1. [Docker for Web Developers](https://pluralsight.pxf.io/Nqm2V)
2. [Kubernetes for Developers: Core Concepts](https://pluralsight.pxf.io/R9W2N)
3. [Kubernetes for Developers: Deploying Your Code](https://pluralsight.pxf.io/bo0jv)

Here are a few questions this course will help you answer:

- What is a Kubernetes Deployment and how do I create one?
- What are the benefits of using Deployments?
- What kubectl commands can be used to create or modify a Deployment?
- What if the Deployment has a problem - how do I roll back?
- What is a Rolling Update and how does it work?
- Can I control the minimum and maximum number of Pods available during a Rolling Update?
- What is a Canary Deployment and why would I want to use that technique?
- What are the advantages of a Blue-Green Deployment?
- What role do Kubernetes Services play with Canary and Blue-Green Deployments?
- What are Jobs and CronJobs?
- How do I create a Job on a scheduled basis (CronJob)?
- How important is it to monitor Kubernetes?
- What monitoring tools are available to monitor Deployments and other Kubernetes resources?
- How can I use kubectl to troubleshoot Deployments that are having an issue?
- And much more…

Here’s a quick visual summary of what the course covers:

# [Kubernetes for Developers: Deploying Your Code](https://pluralsight.pxf.io/bo0jv)

Deploying code to different environments can be challenging! In the **Kubernetes for Developers: Deploying Your Code** course you’ll learn about different deployment techniques that can be used to ensure your code and applications work correctly. The course starts out by providing a look at how deployments work in Kubernetes, including how to define a Deployment using YAML and deploy it to Kubernetes using the kubectl tool. From there you&apos;ll learn how Rolling Update Deployments work, the benefits they offer, and how you can roll back a Deployment if something goes wrong. 

Next, you&apos;ll learn about Canary Deployments, the role they can play to ensure code updates run properly, and when they&apos;re appropriate to use. Blue-Green Deployments are discussed next. With this Deployment technique you can roll out a new version of a Deployment, test it to ensure it works properly, and then route production traffic to it once it&apos;s deemed ready. You&apos;ll then learn about Jobs and CronJobs. Learn how to run a one-time job or even run a job on a schedule using the Cron format.
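
For example, a CronJob wraps a Job template with a schedule written in standard cron format. Here&apos;s a hypothetical manifest sketch (the name and image are placeholders of mine, not course code):

```yaml
apiVersion: batch/v1beta1     # the CronJob API group/version at the time of writing
kind: CronJob
metadata:
  name: sample-cronjob        # placeholder name
spec:
  schedule: "*/5 * * * *"     # cron format: minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: sample
            image: busybox    # placeholder image
            command: ["date"]
          restartPolicy: OnFailure
```

Kubernetes creates a new Job from the template each time the schedule fires.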

Finally, you&apos;ll learn about different monitoring and troubleshooting tools such as Prometheus and Grafana that can be used to monitor Kubernetes and provide alerts when things go wrong. You&apos;ll also learn key troubleshooting commands that you can run to obtain more information about problems that arise. When you’re finished with this course, you’ll have the skills and knowledge required to deploy your code and ensure it works properly in a Kubernetes cluster.
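
As a small preview of one of the topics, the minimum and maximum number of Pods available during a Rolling Update are controlled on the Deployment spec. A hypothetical fragment (the field names come from the apps/v1 API; the values are just examples):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 Pod above the desired count during the update
      maxUnavailable: 1    # at most 1 Pod below the desired count may be unavailable
```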

# Course Modules

1. Course Overview
2. Kubernetes Deployments Overview
    - Overview, Prerequisites, and Code Samples
    - Introduction
    - Kubernetes Deployments Overview
    - Creating an Initial Deployment
    - Kubernetes Deployments in Action
    - Kubernetes Deployment Options
    - Summary
3. Performing Rolling Update Deployments
    - Introduction
    - Understanding Rolling Update Deployments
    - Creating a Rolling Update Deployment
    - Rolling Update Deployment in Action
    - Rolling Back Deployments
    - Rolling Back Deployments in Action
    - Summary
4. Performing Canary Deployments
    - Introduction
    - Understanding Canary Deployments
    - Creating a Canary Deployment
    - Canary Deployments in Action
    - Summary
5. Performing Blue-Green Deployments
    - Introduction
    - Understanding Blue-Green Deployments
    - Creating a Blue-Green Deployment
    - Blue-Green Deployments in Action - The Blue Deployment
    - Blue-Green Deployments in Action - The Green Deployment
    - Summary
6. Running Jobs and CronJobs
    - Introduction
    - Understanding Jobs
    - Understanding CronJobs
    - Creating a Job and CronJob
    - Jobs in Action
    - CronJobs in Action
    - Summary
7. Performing Monitoring and Troubleshooting Tasks
    - Introduction
    - Monitoring and Troubleshooting Overview
    - Web UI Dashboard in Action
    - Metrics Server, kube-state-metrics, and Prometheus in Action
    - Grafana in Action
    - Troubleshooting Techniques with kubectl
    - Troubleshooting Techniques in Action
    - Summary
8. Putting It All Together
    - Reviewing Deployment Options

I hope you enjoy [the course](https://pluralsight.pxf.io/bo0jv) and learn more about different Deployment options and techniques that can be used with Kubernetes!</content:encoded></item><item><title>Using the Kubernetes JavaScript Client Library</title><link>https://blog.codewithdan.com/using-the-kubernetes-javascript-client-library/</link><guid isPermaLink="true">https://blog.codewithdan.com/using-the-kubernetes-javascript-client-library/</guid><pubDate>Tue, 25 Feb 2020 00:00:00 GMT</pubDate><content:encoded>I&apos;ve been working with Kubernetes a lot and focusing on various deployment techniques that can be used (such as Blue-Green Deployments) for a Pluralsight course I&apos;m creating called **Kubernetes for Developers: Deploying Your Code**. If you&apos;re new to Blue-Green Deployments, here&apos;s a quick overview:

![](/images/blog/using-the-kubernetes-javascript-client-library/k8s-blue-green.gif)

While I was working on the course, [Dr. Christian Geuer-Pollmann](https://twitter.com/chgeuer) and I had [chatted on Twitter](https://twitter.com/chgeuer/status/1224766196416905217) about a Blue-Green dashboard he wrote. He did a great job on it! I&apos;ve been wanting to experiment with the [JavaScript Kubernetes Client](https://github.com/kubernetes-client/javascript#readme) library so I decided to see what could be done to create a simple Blue-Green Deployment &quot;dashboard&quot; in an hour or so one night. It&apos;s not nearly as good as Christian&apos;s, but since it shows some of the features the library provides I decided to do a quick write-up about the code.

Here&apos;s an example of what my &quot;dashboard&quot; (if you want to call it that - yes, it&apos;s pretty basic) generates in the console.

```bash
Deployment              Role   Status   Image        Ports     Services
----------------------  -----  -------  -----------  --------  ------------------------------
nginx-deployment-blue   blue   running  nginx-blue   80, 9000  nginx-blue-test, nginx-service
nginx-deployment-green  green  running  nginx-green  9001      nginx-green-test
```

You can see that it lists information about the blue and green Deployments as well as their associated Services. There&apos;s nothing revolutionary about it at all, but that wasn&apos;t really the point of this exercise. I wanted to see how easy it would be to use the library to interact with Kubernetes and access information about different resources. Although it took a little searching to get started, once I knew the proper objects to use it was pretty straightforward.

Before going too far it&apos;s important to note that I ran this from a machine that had rights to access the Kubernetes API using the **kubectl** command-line tool.

## Creating the Initial Project

I started things off by creating a new **package.json** file using **npm init**. I then installed the following dependencies:

```bash
npm install typescript --save-dev
npm install @kubernetes/client-node
npm install easy-table
```

Next, I added a **tsconfig.json** file since I wanted to use TypeScript. I could have just as easily used JavaScript as well but since I&apos;m a big TypeScript fan I went that direction. Because this was a quick experiment I didn&apos;t fully leverage TypeScript, but I can easily add more type information in the future if I ever circle back to the project. It was nice to get great code help/intellisense in VS Code as I was hunting and pecking my way through the API.
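
The post doesn&apos;t show the **tsconfig.json** contents, but a minimal file along these lines would work for a small Node/TypeScript project like this one (my guess at reasonable settings, not the author&apos;s actual file):

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "outDir": "dist",
    "sourceMap": true
  },
  "include": ["*.ts"]
}
```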

Once the project was created I added a **dashboard.ts** file which is responsible for querying Kubernetes, finding specific Deployments and Services, and rendering the desired data to the console using the **easy-table** npm package.

## Using the k8s.KubeConfig() Function

Jumping to the **dashboard.ts** file, I got started by importing two packages:

```typescript
import * as k8s from &apos;@kubernetes/client-node&apos;;
import * as Table from &apos;easy-table&apos;;
```

The **@kubernetes/client-node** package is used to query Kubernetes resources and **easy-table** handles writing the retrieved data out to the console.

Next I needed to create a new **k8s.KubeConfig()** object that could be used to integrate with the Kubernetes API:

```typescript
const kc = new k8s.KubeConfig();
kc.loadFromDefault();
```

The **KubeConfig** object provides a **makeApiClient()** function that can be used to perform queries. I needed to query the **k8s.AppsV1Api** (it allows access to **apps/v1** resources such as Deployments) and **k8s.CoreV1Api** (it allows access to core Kubernetes resources such as Services).

```typescript
const appsV1Api = kc.makeApiClient(k8s.AppsV1Api);
const coreV1Api = kc.makeApiClient(k8s.CoreV1Api);
const roles = [&apos;blue&apos;, &apos;green&apos;];
```

From there I needed to query the cluster&apos;s Deployments and Services using a **getDeployments()** function.

```typescript
getDeployments().catch(console.error.bind(console));
```

## Querying Deployments

Once the API client objects were created I used them to query the Deployments and their associated Services. First, a **getDeployments()** function was created. This function is responsible for retrieving Blue-Green Deployments and their associated Service information.

```typescript
async function getDeployments() {
  // get Deployments and Services
}
```

Here&apos;s how to query Deployments in the **default** namespace and look for Labels that have a **role** set to either **blue** or **green** using the **appsV1Api** object shown earlier:

```typescript
// get Deployments
const deploymentsRes = await appsV1Api.listNamespacedDeployment(&apos;default&apos;);
let deployments = [];
for (const deployment of deploymentsRes.body.items) {
    let role = deployment.spec.template.metadata.labels.role;
    if (role &amp;&amp; roles.includes(role)) {
        deployments.push({ 
            name: deployment.metadata.name, 
            role,
            status: deployment.status.conditions[0].status,
            image: deployment.spec.template.spec.containers[0].image,
            ports: [],
            services: []
        });
    }
}
```

Looking through this code you&apos;ll notice that I&apos;m able to access the Labels within the Deployment template, metadata about the Deployment, the status of the container, the container image, and more.

## Querying Services

Once a blue or green Deployment was found, I used it to find Kubernetes Services associated with the Deployment. This was done using the **coreV1Api** object shown earlier.

```typescript
// get Services
const servicesRes = await coreV1Api.listNamespacedService(&apos;default&apos;);
for (const service of servicesRes.body.items) {
    if (service.spec.selector &amp;&amp; service.spec.selector.role &amp;&amp; roles.includes(service.spec.selector.role)) {
        let filteredDeployments = deployments.filter(d =&gt; {
            return d.role === service.spec.selector.role;
        });
        if (filteredDeployments.length) {
            for (const d of filteredDeployments) {
                d.ports.push(service.spec.ports[0].port);
                d.services.push(service.metadata.name);
            }
        }
    }
}
```

This code queries the Services, looks to see if they&apos;re associated with a blue or green Deployment by looking at the service&apos;s selector, finds the associated Deployment object, and then updates it with the appropriate port and Service name information.
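
To see that matching logic in isolation, here&apos;s a self-contained sketch where plain objects stand in for the API responses (my simplified data, not the client library itself):

```typescript
// Plain-object sketch of the Service-to-Deployment matching (no cluster needed).
type Dep = { name: string; role: string; ports: number[]; services: string[] };

const deployments: Dep[] = [
  { name: 'nginx-deployment-blue', role: 'blue', ports: [], services: [] },
  { name: 'nginx-deployment-green', role: 'green', ports: [], services: [] }
];

// Each Service selects Pods by label; here the selector carries the role.
const services = [
  { name: 'nginx-service', selector: { role: 'blue' }, port: 80 },
  { name: 'nginx-green-test', selector: { role: 'green' }, port: 9001 }
];

for (const svc of services) {
  // Find the Deployment(s) whose role matches the Service selector...
  for (const d of deployments.filter(x => x.role === svc.selector.role)) {
    // ...and record the Service name and port on the matching Deployment.
    d.ports.push(svc.port);
    d.services.push(svc.name);
  }
}

console.log(deployments[0].services); // [ 'nginx-service' ]
console.log(deployments[1].ports);    // [ 9001 ]
```

The real code does the same thing, just with the objects returned by **listNamespacedDeployment()** and **listNamespacedService()**.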

## Rendering Data

The final part of the code handles rendering the retrieved Deployment and Service data to the console using **easy-table**.

```typescript
renderTable(deployments);
```

Here&apos;s the code for the **renderTable()** function:

```typescript
function renderTable(data, showHeader = true) {
    const table = new Table();
    for (const d of data) {
        d.services.sort();
        d.ports.sort();
        table.cell(&apos;Deployment&apos;, d.name);
        table.cell(&apos;Role&apos;, d.role);
        table.cell(&apos;Status&apos;, d.status ? &apos;running&apos; : &apos;stopped&apos;);
        table.cell(&apos;Image&apos;, d.image);
        table.cell(&apos;Ports&apos;, d.ports.join(&apos;, &apos;));
        table.cell(&apos;Services&apos;, d.services.join(&apos;, &apos;));
        table.newRow();
    }
    table.sort([&apos;Role|asc&apos;]);
    if (showHeader) {
        console.log(table.toString());
    } else {
        console.log(table.print());
    }
}
```

You can find more information about **easy-table** at [https://github.com/eldargab/easy-table](https://github.com/eldargab/easy-table).

## Summary

Although this is a basic use case for the [JavaScript Kubernetes Client](https://github.com/kubernetes-client/javascript) library, it offers a lot of promise in more robust scenarios where Kubernetes resources need to be queried. There&apos;s a lot more that could be added to the code (better error handling for example), but hopefully it provides a nice starter if you&apos;re interested in querying your Kubernetes cluster using the JavaScript library.

[Additional libraries](https://github.com/kubernetes-client) are also available for Java, Go, Python, C#, and other languages/frameworks which is super nice. The full code shown in this post can be [found here](https://github.com/DanWahlin/DockerAndKubernetesCourseCode/tree/master/samples/blue-green/dashboard). The readme file provides information about how to build and run the project.</content:encoded></item><item><title>Enabling Metrics Server for Kubernetes on Docker Desktop</title><link>https://blog.codewithdan.com/enabling-metrics-server-for-kubernetes-on-docker-desktop/</link><guid isPermaLink="true">https://blog.codewithdan.com/enabling-metrics-server-for-kubernetes-on-docker-desktop/</guid><pubDate>Mon, 17 Feb 2020 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/enabling-metrics-server-for-kubernetes-on-docker-desktop/2019-03-10_15-40-57-1.webp)

Lately we&apos;ve been working on a new [Docker and Kubernetes](https://codewithdan.com/products/docker-kubernetes) instructor-led training class that we&apos;ll be running onsite at several companies this year. The class uses [Docker Desktop](https://www.docker.com/products/docker-desktop) and the [Kubernetes](https://kubernetes.io) features it provides for several of the chapters. We needed to get the local cluster students will use to match as closely as possible to a cloud-based Kubernetes cluster that would be found on Azure, AWS, or GCP. The class covers using AKS as well, but most of the lab exercises rely on Kubernetes in Docker Desktop so running key features like the dashboard and Metrics API was important.

The majority of the Kubernetes functionality on Docker Desktop works great out of the box. You can run standard **kubectl** commands, work with various **Service types** including LoadBalancer (it supports localhost), run **Deployments** (against a single Node), and more, but getting **kubectl top** commands to work was challenging.

It turns out that Metrics Server isn&apos;t installed by default with Docker Desktop. You do get it automatically if you install Kubernetes using **kube-up.sh**. To work around that, we installed **Metrics Server** by following the directions at [https://github.com/kubernetes-incubator/metrics-server#deployment](https://github.com/kubernetes-incubator/metrics-server#deployment), but running **kubectl top** commands resulted in &quot;no metrics available&quot; messages. Definitely frustrating.

By running **kubectl logs \[metrics-server-pod-name\] -n kube-system** we could see that the Pod/Container was there, but it looked like some unexpected issues were coming up.

After doing some research (translated: Google Fu), I came across a [GitHub issue](https://github.com/docker/for-mac/issues/2751#issuecomment-441833752) that seemed to solve the problem and enabled the **kubectl top** command to start reporting information about Nodes and Pods on Docker Desktop/Kubernetes. Here are the steps that fixed the issue.

## Enabling Metrics Server in Docker Desktop

1\. Clone or download the [Metrics Server project](https://github.com/kubernetes-incubator/metrics-server).

2\. Open the **deploy/kubernetes/metrics-server-deployment.yaml** file in an editor.

3\. Add the **\--kubelet-insecure-tls** argument into the existing **args** section. That section will look like the following once you&apos;re done:

```yaml
args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-insecure-tls
```

NOTE: **DO NOT enable kubelet-insecure-tls on a cluster** **that will be accessed externally**. This is only being done for a local Docker Desktop cluster.

4\. Run the following command as shown on the [Metrics Server repo](https://github.com/kubernetes-incubator/metrics-server) to create the deployment, services, etc.

```bash
kubectl create -f deploy/kubernetes
```

5\. To see how things are going, first get the name of your Metrics Server Pod by running the following command:

```bash
kubectl get pods -n kube-system
```

6\. Now run the following command and the logs should show it starting up and the API being exposed successfully:

```bash
kubectl logs [metrics-server-pod-name] -n kube-system
```

7\. Give it a little time and you should now be able to run **kubectl top** commands!

![View of kubectl node command.](/images/blog/enabling-metrics-server-for-kubernetes-on-docker-desktop/2019-03-10_15-32-44.webp)

![](/images/blog/enabling-metrics-server-for-kubernetes-on-docker-desktop/2019-03-10_15-40-57.webp)

There are almost always multiple ways to accomplish the same goal so if you know of an alternate technique for getting Metrics Server going on Docker Desktop Kubernetes please leave a comment!

I&apos;m hoping that at some point this functionality will ship directly in Docker Desktop, but for now you have to install it to get it running.

[Discuss on Twitter](https://twitter.com/search?q=https%3A%2F%2Fblog.codewithdan.com%2Fenabling-metrics-server-for-kubernetes-on-docker-desktop%2F&amp;src=typd)</content:encoded></item><item><title>New Pluralsight Course: Creating Object-oriented TypeScript Code</title><link>https://blog.codewithdan.com/new-pluralsight-course-creating-object-oriented-typescript-code/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-creating-object-oriented-typescript-code/</guid><pubDate>Fri, 14 Feb 2020 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/new-pluralsight-course-creating-object-oriented-typescript-code/2020-02-29_14-13-17-1024x180.webp)](https://pluralsight.pxf.io/ZAVoz)

I’m excited to announce the release of another course on Pluralsight titled [Creating Object-oriented TypeScript Code](https://pluralsight.pxf.io/ZAVoz)! If you&apos;ve been wanting to learn more about ES2015 features as well as object-oriented features available in TypeScript then this is the class for you!

Here are a few questions this course will help you answer:

- What techniques can be used to create objects in JavaScript and TypeScript?
- What is object-oriented programming?
- Can I work with classes?
- What are constructors?
- What are get/set properties?
- How do I define functions in classes?
- Is a static member the same as an instance member?
- Can classes be &quot;extended&quot;?
- What are abstract classes?
- What is the role of interfaces and how can I use them?
- When should I consider using classes, abstract classes, interfaces, polymorphism, inheritance, and more?
- Can you show me an example of these concepts being put to use in an application?
- And much more…

Here’s a summary of the course…

# [Creating Object-oriented TypeScript Code](https://pluralsight.pxf.io/ZAVoz)

TypeScript supports many different ways to define and create objects, which can be confusing, especially when you&apos;re new to the language. Should you use a constructor function, Object.create(), classes, a coding pattern, or some other technique when creating objects?

The Creating Object-oriented TypeScript Code course will show different ways to create objects while focusing on object-oriented programming (OOP) techniques that can be used to maximize reuse and enhance productivity. Throughout the course you&apos;ll learn about the core principles of object-oriented programming such as encapsulation, polymorphism, inheritance, and abstraction and see how they can be applied and used. You&apos;ll learn how to define and instantiate classes in TypeScript, understand what members can be added to a class and the role they play, learn how inheritance can be used to promote reuse, and learn about what an abstract class is and why you&apos;d use one.

You&apos;ll also learn about the role of interfaces and how they can be used to create code contracts that drive consistency across a set of objects and enable polymorphic behavior. When you&apos;re finished with this course you&apos;ll have the skills and knowledge needed to build robust object-oriented applications using the TypeScript language and understand when and why to apply object-oriented programming principles.
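
To make those ideas concrete, here&apos;s a tiny standalone sketch of my own (not code from the course) that combines an abstract class, an interface, inheritance, and polymorphism:

```typescript
// Standalone sketch (not course code): abstract class + interface + polymorphism.
interface Describable {
  describe(): string;
}

abstract class Shape implements Describable {
  constructor(protected name: string) {}
  abstract area(): number;          // each subclass must supply its own area()
  describe(): string {              // shared behavior inherited by subclasses
    return `${this.name} area=${this.area()}`;
  }
}

class Rectangle extends Shape {
  constructor(private w: number, private h: number) { super('rectangle'); }
  area(): number { return this.w * this.h; }
}

class Circle extends Shape {
  constructor(private r: number) { super('circle'); }
  area(): number { return Math.PI * this.r * this.r; }
}

// Polymorphism: the same call works against the Shape abstraction.
const shapes: Shape[] = [new Rectangle(2, 3), new Circle(1)];
console.log(shapes.map(s => s.describe()).join('; '));
// rectangle area=6; circle area=3.141592653589793
```

Each concept here gets a full module in the course, along with guidance on when to reach for each one.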

# Course Modules

1. Course Overview
2. Introduction to Object-oriented Programming in TypeScript
    - Overview
    - Introduction
    - The Role of Objects
    - Object Creation Techniques
    - Object-oriented Concepts
    - Summary
3. Classes and Objects
    - Introduction
    - The Role of Classes
    - Creating a Class
    - Adding Class Members
    - Creating a Class Instance
    - Constructors and Properties
    - Static Members
    - Summary
4. Inheritance and Abstraction
    - Introduction
    - The Role of Inheritance
    - Inheriting from a Class
    - The Role of Abstract Classes
    - Creating and Inheriting from an Abstract Class
    - Overriding Members
    - Summary
5. Interfaces and Polymorphism
    - Introduction
    - The Role of Interfaces
    - Creating an Interface
    - Using Interfaces
    - Interfaces, Classes, and Polymorphism
    - Summary
6. Putting It All Together
    - Putting It All Together
    - Reviewing the Code

I hope you enjoy [the course](https://pluralsight.pxf.io/ZAVoz) and gain new insights into the role object-oriented concepts and TypeScript features can play in your application code.</content:encoded></item><item><title>Observable Store - Now with Support for the Redux DevTools</title><link>https://blog.codewithdan.com/observable-store-now-with-support-for-the-redux-devtools/</link><guid isPermaLink="true">https://blog.codewithdan.com/observable-store-now-with-support-for-the-redux-devtools/</guid><pubDate>Mon, 20 Jan 2020 00:00:00 GMT</pubDate><content:encoded>[Observable Store](https://github.com/DanWahlin/Observable-Store) now supports extensions! These can be added when the application first loads by calling **ObservableStore.addExtension()**.

The first built-in extension adds [Redux DevTools](https://github.com/reduxjs/redux-devtools) integration into applications that use Observable Store. The extension can be found in the **@codewithdan/observable-store-extensions** package.

![](/images/blog/observable-store-now-with-support-for-the-redux-devtools/reduxDevTools-1024x624.webp)

If you&apos;re new to the Redux DevTools, they can be used to &quot;time travel&quot; through your application to see what happened at particular times. This feature is extremely useful when you&apos;re trying to track down a problem or simply want to see what state flows through your application. You can get more details on the various [Redux DevTools extension features here](https://github.com/reduxjs/redux-devtools#documentation).

Currently the Observable Store extensions package provides Redux DevTools support for [Angular](#Angular) and [React](#React) applications. Here&apos;s a walk-through of enabling support for both of these options.

## Integrating Angular with the Redux DevTools

The first thing you&apos;ll need to do is install the Observable Store extensions package. This assumes that you&apos;ve already installed the [Observable Store](https://github.com/DanWahlin/Observable-Store) package and have it added into your app.

```bash
npm install @codewithdan/observable-store-extensions
```

Once the package is installed, add the following into **main.ts** and ensure that you set **trackStateHistory** to **true**:

```typescript
import { ObservableStore } from &apos;@codewithdan/observable-store&apos;;
import { ReduxDevToolsExtension } from &apos;@codewithdan/observable-store-extensions&apos;;

...

ObservableStore.globalSettings = {  
    trackStateHistory: true
};
ObservableStore.addExtension(new ReduxDevToolsExtension());
```

Now install the [Redux DevTools Extension](https://chrome.google.com/webstore/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd) in your browser, run your Angular application, and open the Redux DevTools extension. Your store state will now be shown and you can use the time travel functionality provided by Redux DevTools as well as the other functionality.

## Integrating React with the Redux DevTools

Get started by installing the Observable Store extensions package. This assumes that you&apos;ve already installed the [Observable Store](https://github.com/DanWahlin/Observable-Store) package and have it added into your app.

```bash
npm install @codewithdan/observable-store-extensions
```

Now add the **history** prop to your route (note that Observable Store works with **react-router**):

```jsx
import React from &apos;react&apos;;
import { Router, Route, Redirect } from &apos;react-router-dom&apos;;
import { createBrowserHistory } from &apos;history&apos;;

export const history = createBrowserHistory();

...

const Routes = () =&gt; (
  &lt;Router history={history}&gt;
    &lt;div&gt;
       {/* Routes go here */}
    &lt;/div&gt;
  &lt;/Router&gt;
);

export default Routes;
```

Add the following into **index.js** and ensure that you set **trackStateHistory** to **true** and pass the **history** object into the **ReduxDevToolsExtension** constructor as shown:

```jsx
import Routes, { history } from &apos;./Routes&apos;;
import { ObservableStore } from &apos;@codewithdan/observable-store&apos;;
import { ReduxDevToolsExtension } from &apos;@codewithdan/observable-store-extensions&apos;;

...

ObservableStore.globalSettings = {  
    trackStateHistory: true
};
ObservableStore.addExtension(new ReduxDevToolsExtension({ 
    reactRouterHistory: history 
}));

ReactDOM.render(&lt;Routes /&gt;, document.getElementById(&apos;root&apos;));
```

Now install the [Redux DevTools Extension](https://chrome.google.com/webstore/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd) in your browser, run your React application, and open the Redux DevTools extension. Your store state will now be shown and you can use the time travel functionality provided by Redux DevTools as well as the other functionality.

## Summary

I&apos;ve found the addition of the Redux DevTools extension into my workflow to be very useful for watching how state changes in an application, jumping to a particular point in time to fix a bug, and for other scenarios. If you&apos;re already using Observable Store with Angular or React I hope you&apos;ll find the new functionality useful as well.</content:encoded></item><item><title>Using the Docker &amp;quot;before&amp;quot; Filter to Remove Multiple Images</title><link>https://blog.codewithdan.com/using-the-docker-before-filter-to-remove-multiple-images/</link><guid isPermaLink="true">https://blog.codewithdan.com/using-the-docker-before-filter-to-remove-multiple-images/</guid><pubDate>Tue, 14 Jan 2020 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/using-the-docker-before-filter-to-remove-multiple-images/docker_logo.webp)

I recently needed to clean up a bunch of old Docker images on a VM that I run in Azure. While I could remove each image one by one using the standard **docker rmi \[IMAGE IDS\]** command, removing multiple images all at once as a batch was preferable.

It turns out that removing a specific range of images is fairly straightforward using the [**&quot;before&quot;** filter](https://docs.docker.com/engine/reference/commandline/images/#options). You can do the following to list all images that exist before a particular image:

```bash
docker images danwahlin/nginx-codelabs -f &quot;before=danwahlin/nginx-codelabs:1.15&quot;
```

Running the command will show all of the images that existed before the **danwahlin/nginx-codelabs:1.15** image (basically the older images). Once you run the command and confirm that the correct images are showing, you can remove all of them in a batch using the following command:

```bash
docker images danwahlin/nginx-codelabs -f &quot;before=danwahlin/nginx-codelabs:1.15&quot; -q | xargs docker rmi
```

Note that the **\-q** switch located toward the end of the command is used to only list the IDs for each image. These IDs are then piped into the **docker rmi** command and removed as a batch.
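If you want to preview exactly what **xargs** will run before letting it loose on **docker rmi**, a handy trick is to substitute **echo** for the real command. The snippet below simulates the **\-q** output with **printf** and a few made-up IDs so nothing is actually removed:

```shell
# Fake IDs stand in for the `docker images ... -q` output;
# `echo` stands in for the real `docker rmi`, so this is a safe dry run
printf 'a1b2c3\nd4e5f6\n0a1b2c\n' | xargs echo docker rmi
# prints: docker rmi a1b2c3 d4e5f6 0a1b2c
```

Once the dry run shows the command you expect, drop the **echo** and let **xargs** pipe the real IDs into **docker rmi**.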

Nice and easy!

## Obligatory Disclaimer

It goes without saying that you&apos;ll want to use caution if you&apos;re doing something like this on a machine that is considered &quot;critical&quot;. Double and triple check that the correct image IDs are being returned using the first command above. The good news is images associated with an existing container will display an error and not be removed (you&apos;d have to force the removal using docker rmi -f). Plus, you can always pull missing images again from your registry. To sum it up - don&apos;t do something stupid that you&apos;ll regret later! :-)</content:encoded></item><item><title>Installing MongoDB on Mac Catalina using Homebrew</title><link>https://blog.codewithdan.com/installing-mongodb-on-mac-catalina-using-homebrew/</link><guid isPermaLink="true">https://blog.codewithdan.com/installing-mongodb-on-mac-catalina-using-homebrew/</guid><pubDate>Fri, 10 Jan 2020 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/installing-mongodb-on-mac-catalina-using-homebrew/mongodb.webp)

I recently bought a new iMac and moved all of my files over using Time Machine. The migration went really well overall and within a few hours I had my development machine up and running. After starting an application I&apos;m building I quickly realized that I couldn&apos;t get MongoDB to start. Running the following command resulted in an error about the **data/db** directory being read-only:

```bash
mongod --auth
```

I tried every **chmod** and **chown** command known to man and woman kind, tried manually changing security in Finder, compared security to my other iMac (they were the same), and tried a bunch of other things as well. But try as I might, I still saw the read-only folder error when trying to start the server... very frustrating. I found a lot of posts with the same issue, but they all solved it by changing security on the folder. That wasn&apos;t the problem on my machine.

After doing more research I found out that Catalina adds a new volume to the hard drive and creates a special folder where the MongoDB files need to go. The new folder is:

```bash
/System/Volumes/Data
```

The MongoDB files can then go at:

```bash
/System/Volumes/Data/data/db
```

I ran the following commands to install the latest version of MongoDB using [Homebrew](https://brew.sh/) (see [https://github.com/mongodb/homebrew-brew](https://github.com/mongodb/homebrew-brew) for more details):

```bash
brew tap mongodb/brew
brew install mongodb-community
```

I then went into the MongoDB config file at **/usr/local/etc/mongod.conf**. Note that yours may be in a different location depending on how you installed MongoDB. I changed the **dbPath** value (under the **storage** section) to the following and copied my existing DB files into the folder:

```yaml
storage:
  dbPath: /System/Volumes/Data/data/db
```

Finally, I made sure my account had the proper access to the folder by running **chown** (something I had tried many times earlier but on a folder outside of /System/Volumes/Data):

```bash
sudo chown -R $USER /System/Volumes/Data/data/db
```

After that I was able to start MongoDB and everything was back to normal. Hopefully this saves someone a few hours - I wasted way too much time on the issue. :-)</content:encoded></item><item><title>Observable Store 2.0 Released on npm!</title><link>https://blog.codewithdan.com/observable-store-2-0-released-on-npm/</link><guid isPermaLink="true">https://blog.codewithdan.com/observable-store-2-0-released-on-npm/</guid><pubDate>Mon, 14 Oct 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/observable-store-2-0-released-on-npm/ObservableStore-1024x499.webp)

I&apos;m excited to announce the release of [Observable Store 2.0 on npm](https://www.npmjs.com/package/@codewithdan/observable-store)! You can get started using it with the standard **npm install** command:

```bash
npm install @codewithdan/observable-store
```

The [Github repository](https://github.com/danwahlin/observable-store) provides information about the specific steps to get started and how Observable Store can be used with various front-end projects.

## Why Observable Store?

Before jumping into what&apos;s new in 2.0, let me give a quick overview of the [Observable Store](https://github.com/danwahlin/observable-store) project in case you&apos;re new to it. Several years ago I was working with a few large companies here in the United States that were building Angular and React apps. One in particular was hiring a lot of front-end developers and struggling to get them up to speed with the library/framework being used. Layering on some of the state management options available at the time (Redux, etc.) made things even harder for the new hires, since those solutions tended to add a lot of complexity and code. The existing state management solutions also locked you into a specific framework/library, which wasn&apos;t desirable for some of the teams.

As I talked with various teams and worked on a large project within [my own company](https://codewithdan.com) as well, I wondered if there might be a simpler way to achieve the same overall goals as the more complex state management solutions. I came up with the following set of goals that I wanted Observable Store to satisfy. If you&apos;re familiar with other front-end state management options you&apos;ll probably recognize many of these.

- Keep it simple!
- Single source of truth for state
- Store state is immutable
- Provide state change notifications to any subscriber
- Track state change history
- Easy to understand with a minimal amount of code required to get started
- Works with any front-end project built with JavaScript or TypeScript (Angular, React, Vue, or anything else)

At the time I had been working a lot with [RxJS](https://rxjs.dev) and knew that it could help a lot with providing [state change notifications](https://blog.codewithdan.com/ng-conf-talk-mastering-the-subject-communication-options-in-rxjs/) (something that&apos;s key with state management) so I decided to make an initial attempt at creating a simple state store that other objects within an application could extend to instantly get state management functionality.
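The publish/subscribe idea at the heart of those state change notifications can be sketched in a few lines of plain TypeScript. This is illustrative only (the real library builds on RxJS rather than a hand-rolled listener list, and the names here are made up):

```typescript
// Tiny sketch of store change notifications (illustrative only - not the library's code)
interface AppState {
  count: number;
}

type Listener = (state: AppState) => void;

class MiniStore {
  private listeners: Listener[] = [];
  constructor(private state: AppState) {}

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  setState(next: AppState): void {
    this.state = { ...next };                   // store keeps its own copy
    this.listeners.forEach(l => l(this.state)); // notify every subscriber
  }

  getState(): AppState {
    return { ...this.state };
  }
}

const store = new MiniStore({ count: 0 });
let latest = -1;
store.subscribe(s => { latest = s.count; });
store.setState({ count: 5 });
console.log(latest); // 5
```

RxJS subjects provide this same notify-all-subscribers behavior (plus much more) out of the box, which is why they were a natural fit for the store.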

That project eventually grew into [Observable Store](https://github.com/danwahlin/observable-store) and my team ended up using it in a project we&apos;re working on that has a lot of complex state management needs. I&apos;ve already discussed Observable Store in a previous post so if you&apos;d like to learn more read [Simplifying Front-End State Management with Observable Store](https://blog.codewithdan.com/simplifying-front-end-state-management-with-observable-store/) or check out the project&apos;s [Github repository](https://github.com/danwahlin/observable-store).

## Observable Store 2.0 Features and Changes

The overall goal of Observable Store is to keep things simple and with the 2.0 release that still holds true. Here&apos;s a quick rundown on what&apos;s new.

#### Global Store Settings

The underlying API has stayed the same (no breaking API changes), but the way settings can be passed to the store is now more flexible. Observable Store 2.0 has a **globalSettings** option that allows settings to be defined once for the entire application. Previously, settings were passed through a [service that extended Observable Store](https://github.com/danwahlin/observable-store#steps-to-use-observable-store). While this approach worked fine, if you had 10 services that extended Observable Store and wanted a particular setting to always be &quot;on&quot; (such as **trackStateHistory**) for all services, then you had to pass the setting 10 times. Now that setting can be set as the application initializes using the **globalSettings** property:

```typescript
ObservableStore.globalSettings = { /* settings go here */ };
```

Any global settings can be overridden by a service that extends **ObservableStore** as well, giving you complete control over when a setting needs to be changed. For a complete list of settings visit the Github repository:

- [General Store Settings](https://github.com/danwahlin/observable-store#settings)
- [Global Store Settings](https://github.com/danwahlin/observable-store#global-store-settings)
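The override behavior described above amounts to layering service-level settings over the globals. A minimal sketch of that merge (illustrative only, with made-up setting names - not the library&apos;s actual implementation):

```typescript
// Illustrative sketch of global vs. per-service settings (not the library's code)
interface StoreSettings {
  trackStateHistory?: boolean;
  logStateChanges?: boolean;
}

const globalSettings: StoreSettings = { trackStateHistory: true, logStateChanges: true };

// Spread order means service-level settings win over global ones
function resolveSettings(serviceSettings: StoreSettings = {}): StoreSettings {
  return { ...globalSettings, ...serviceSettings };
}

console.log(resolveSettings({ logStateChanges: false }));
// trackStateHistory stays true from the globals; logStateChanges is overridden to false
```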

#### Enhanced Type Support

Behind the scenes Observable Store also adds additional type support using TypeScript generics. Previously, the **stateChanged** observable was typed as **Observable&lt;any&gt;** since a slice of state could be returned as opposed to the entire state of the store. With Observable Store 2.0 (based on user feedback) I&apos;ve decided to change **stateChanged** to **Observable&lt;T&gt;** where **T** is an interface or class describing the store state. By using **Observable&lt;T&gt;**, users get better intellisense/code help when subscribing to **stateChanged**. If they do return a slice of the state they can define a different type in the subscription parameter signature to handle the casting. Thanks to [GustavoCostaW](https://github.com/DanWahlin/Observable-Store/issues/39) for the suggestion.

#### RxJS as a Peer Dependency

In Observable Store 1.x RxJS was a required dependency causing the RxJS module to be installed under **@codewithdan/observable-store** in **node\_modules**. This worked fine until RxJS was installed somewhere else in **node\_modules** (see my post on solving the issue of having [multiple RxJS installations in node\_modules here](https://blog.codewithdan.com/rxjs-error-types-of-property-source-are-incompatible/)). With Observable Store 2.0 RxJS is now a peer dependency which means users will have to install it themselves. The docs have been updated to mention this change.
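Mechanically, the change just moves **rxjs** out of **dependencies** and into **peerDependencies** in the package&apos;s package.json; the version range below is illustrative:

```json
{
  "peerDependencies": {
    "rxjs": "^6.4.0"
  }
}
```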

#### Cloning State

Observable Store 2.0 clones state to ensure immutability within the store and that state change detection works correctly across different front-end libraries. While this works great and serves its purpose, it&apos;s actually only needed during development. Once code is working properly and an application is built for production, some of the cloning can be disabled to enhance performance which is especially important with applications that store a large amount of data in the store.

The new **globalSettings** property now provides an **[isProduction](https://github.com/danwahlin/observable-store#the-isproduction-property)** setting. When **isProduction** is false (such as during development) then state cloning will be used as state enters and leaves the store. When **isProduction** is true, state cloning will be minimized to enhance performance. Thanks to [Mike Ryan](https://github.com/MikeRyanDev) from the [NgRx team](https://github.com/ngrx) for sharing this tip with me.
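The idea can be sketched as a clone that short-circuits in production. This is illustrative only - the library&apos;s own cloning is more involved than a **JSON** round-trip:

```typescript
// Sketch of the isProduction idea: clone state only during development
// (illustrative only - not the library's actual cloning code)
function cloneState(state: object, isProduction: boolean): object {
  if (isProduction) {
    return state; // skip the relatively expensive clone in production builds
  }
  // Simple deep clone; fine for plain JSON-style state
  return JSON.parse(JSON.stringify(state));
}

const original = { customers: [{ id: 1, name: 'Ted' }] };
console.log(cloneState(original, false) !== original); // true - dev gets a fresh copy
console.log(cloneState(original, true) === original);  // true - prod returns the same reference
```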

#### Compiling with CommonJS

Observable Store 1.0 used ES2015 modules for the TypeScript compilation. This worked great as long as the application supported ES2015 module syntax, but not so great if a project or package didn&apos;t. An issue was reported by [roger-gl](https://github.com/DanWahlin/Observable-Store/issues/38) about Observable Store not working with Jest out of the box. After some digging into the issue I found that this was something that could be configured to work in Jest, but it was just as easy for me to make this work automatically by switching the Observable Store TypeScript compilation to use CommonJS. That is now enabled in 2.0 to make it work out of the box in scenarios that don&apos;t necessarily support ES2015 module syntax natively.

#### General Refactoring and Additional Unit Tests

The final major change is completely behind the scenes. All of the base store functionality, global settings, etc. have been moved into an **[ObservableStoreBase](https://github.com/DanWahlin/Observable-Store/blob/master/src/observable-store-base.ts)** class which acts as a singleton to store state. This is a minor change but provides an easier way to see what&apos;s created per service in an application versus created only once for an application.

Finally, additional [unit tests](https://github.com/DanWahlin/Observable-Store/blob/master/src/observable-store.spec.ts) have been added to cover more scenarios. The number of tests will continue to grow over time.

#### Redux DevTools Support

Observable Store state can now be viewed using the Redux DevTools! If you&apos;ve used these tools with Redux, NgRx, or another option, you can now access the same functionality. Check-out [this blog post on the topic](/observable-store-now-with-support-for-the-redux-devtools/).

![](/images/blog/observable-store-2-0-released-on-npm/reduxDevTools-1024x624.webp)

## Summary

That&apos;s a wrap on the new features in Observable Store 2.0. There&apos;s nothing &quot;earth shattering&quot; included and I&apos;m actually very happy about that since the overall goal of the project is to keep things simple and the project small. If you haven&apos;t tried it out, check out the [documentation](https://github.com/DanWahlin/Observable-Store) and [sample projects](https://github.com/DanWahlin/Observable-Store/tree/master/samples) in the repository and you&apos;ll see how easy it is to add state management into your application without adding a lot of code and complexity.

Thanks to everyone who has contributed code, documentation, and provided details about issues that have come up! I really appreciate it!

- ElAndyG - [elAndyG](https://github.com/elAndyG)
- Mickey Puri - [mickeypuri](https://github.com/mickeypuri)
- Bob - [crowcoder](https://github.com/crowcoder)
- Adam L Barrett - [BigAB](https://github.com/BigAB)
- Gustavo Costa - [GustavoCostaW](https://github.com/GustavoCostaW)
- Brandon Roberts - [brandonroberts](https://github.com/brandonroberts)</content:encoded></item><item><title>New Pluralsight Course - Kubernetes for Developers: Core Concepts</title><link>https://blog.codewithdan.com/new-pluralsight-course-kubernetes-for-developers-core-concepts/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-kubernetes-for-developers-core-concepts/</guid><pubDate>Sun, 13 Oct 2019 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/new-pluralsight-course-kubernetes-for-developers-core-concepts/2019-10-13_11-12-05-1-1024x236.webp)](https://pluralsight.pxf.io/R9W2N)

I’m excited to announce the release of my next course on Pluralsight titled [Kubernetes for Developers: Core Concepts](https://pluralsight.pxf.io/R9W2N)! Since creating the [Docker for Web Developers](https://pluralsight.pxf.io/Nqm2V) course I&apos;ve been wanting to create a course that takes developers to the &quot;next level&quot; with containers and the new Kubernetes for Developers course does that!

Here are a few questions this course will help you answer:

- Why should I learn about Kubernetes if I&apos;m a developer?
- What are the core concepts that I should know about to get started?
- How can I get Kubernetes running on my development machine?
- What is the role of a Pod in Kubernetes and how does it relate to a container?
- What is YAML and how can I use it to define and create Kubernetes resources?
- What is the role of the Kubernetes kubectl command and what are some of the key commands to know about?
- What&apos;s the difference between a ReplicaSet and a Deployment?
- How do I create a Deployment in YAML and get it running in Kubernetes using kubectl?
- How can I deploy a new version of the application without impacting users at all?
- What is the role of Services in Kubernetes and how can I create and deploy Services?
- What storage options does Kubernetes have and how can I use them in Pods and containers?
- How can I pass configuration settings to containers running in Kubernetes?
- How can I pass secrets (sensitive data) to containers running in Kubernetes?
- What troubleshooting techniques can I use if I encounter problems with containers running in Kubernetes?
- Can you show me how to put everything together to get an application up and running in Kubernetes?
- And much more…

Here’s a summary of the course…

# [Kubernetes for Developers: Core Concepts](https://pluralsight.pxf.io/R9W2N)

The Kubernetes for Developers: Core Concepts course provides a developer-focused look at the role Kubernetes can play in the development workflow. If you need to get your application containers into Kubernetes then this course will help jumpstart that process.

Learn how to get Kubernetes up and running locally on your machine, interact with Kubernetes using kubectl, use different resources it provides, deploy containers within Pods, work with Deployments, expose a Pod with a Service, understand the role of storage, ConfigMaps, and Secrets, troubleshoot Pods, and more. By the end of the course you&apos;ll understand the role Kubernetes can play in your development workflow and how it can be used to orchestrate and manage your containers. If you&apos;re looking to get started learning about the core concepts of Kubernetes then this course is for you!

When you&apos;re finished with this course, you&apos;ll have the skills and knowledge needed to get application containers up and running inside of a Kubernetes cluster.

# Course Modules

1. Course Overview
2. Kubernetes from a Developer Perspective
    - Overview
    - Introduction
    - Kubernetes Overview
    - The Big Picture
    - Benefits and Use Cases
    - Running Kubernetes Locally
    - Getting Started with kubectl
    - Web UI Dashboard
    - Summary
3. Creating Pods
    - Introduction
    - Pod Core Concepts
    - Creating a Pod
    - kubectl and Pods
    - YAML Fundamentals
    - Defining a Pod with YAML
    - kubectl and YAML
    - Pod Health
    - Pod Health in Action
    - Summary
4. Creating Deployments
    - Introduction
    - Deployments Core Concepts
    - Creating a Deployment
    - kubectl and Deployments
    - kubectl Deployments in Action
    - Deployment Options
    - Zero Downtime Deployments in Action
    - Summary
5. Creating Services
    - Introduction
    - Services Core Concepts
    - Service Types
    - Creating a Service with kubectl
    - Creating a Service with YAML
    - kubectl and Services
    - kubectl Services in Action
    - Summary
6. Understanding Storage Options
    - Introduction
    - Storage Core Concepts
    - Volumes
    - Volumes in Action
    - PersistentVolumes and PersistentVolumeClaims
    - PersistentVolume and PersistentVolumeClaim YAML
    - StorageClasses
    - PersistentVolumes in Action
    - Summary
7. Creating ConfigMaps and Secrets
    - Introduction
    - ConfigMaps Core Concepts
    - Creating a ConfigMap
    - Using a ConfigMap
    - ConfigMaps in Action
    - Secrets Core Concepts
    - Creating a Secret
    - Using a Secret
    - Secrets in Action
    - Summary
8. Putting it All Together
    - Introduction
    - Application Overview
    - YAML Manifests
    - Running the Application
    - Troubleshooting Techniques
    - Troubleshooting Techniques in Action
    - Summary
9. Course Summary

I hope you enjoy [the course](https://pluralsight.pxf.io/R9W2N) and gain new insights into the role Kubernetes can play in your development workflow and in the application deployment process.

</content:encoded></item><item><title>RxJS Error: &quot;Types of property &apos;source&apos; are incompatible&quot; and How to Fix It</title><link>https://blog.codewithdan.com/rxjs-error-types-of-property-source-are-incompatible/</link><guid isPermaLink="true">https://blog.codewithdan.com/rxjs-error-types-of-property-source-are-incompatible/</guid><pubDate>Sat, 12 Oct 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/rxjs-error-types-of-property-source-are-incompatible/wrenches-1024x768.webp)

I&apos;m working on an npm package that requires RxJS as a peerDependency which means that whatever app uses the package must also install RxJS. Since my npm package project needs RxJS to build, I add it as a **devDependency** which of course adds it into the node\_modules folder of the project.

To use my npm package locally in the sample apps I have, I do the following, which is a nice trick to avoid having to publish the package to npm (something I don&apos;t want to do while I&apos;m still working on it):

1. Run **npm link \[package-name\]** where package-name is the name of the project folder (which will normally be your package name - if not, adjust the package-name as appropriate).
2. Run **npm link \[@yourNpmOrganization/package-name\]** in the sample app that needs to reference the package project. If you don&apos;t use an npm organization, just leave that part out and use the package-name value.
3. When you&apos;re done you can run **npm unlink \[@yourNpmOrganization/package-name\]** to unlink the app from the local package.

Doing this works great and I can update my package project and have it immediately affect the target application due to the npm linking. That saves me having to publish to npm every time I update the project, which is ideal for local testing.

After doing the linking everything was working great until one of the sample apps that I use to test the package project threw an RxJS error similar to the following:

&gt; Types of property &apos;source&apos; are incompatible

I&apos;ve seen this error before and knew it was due to having a copy of RxJS (as a devDependency) in the package project&apos;s node\_modules folder and a copy of RxJS in the sample app&apos;s node\_modules folder. Having these two copies in the node\_modules folder causes issues even if the two RxJS copies are the same version.

While I could temporarily delete RxJS from the package project to get around the error, that would defeat the purpose of linking since I need RxJS in the project to build it. So, what to do?

The best solution I&apos;ve found so far is to add the following configuration **paths** property into the sample app&apos;s **tsconfig.json** file (since it&apos;s a TypeScript project in this case):

```json
{
  &quot;compilerOptions&quot;: {
    &quot;baseUrl&quot;: &quot;./&quot;,
    &quot;outDir&quot;: &quot;./dist&quot;,
    &quot;sourceMap&quot;: true,
    &quot;declaration&quot;: false,
    &quot;module&quot;: &quot;esnext&quot;,
    &quot;moduleResolution&quot;: &quot;node&quot;,
    &quot;emitDecoratorMetadata&quot;: true,
    &quot;experimentalDecorators&quot;: true,
    &quot;target&quot;: &quot;es2015&quot;,
    &quot;typeRoots&quot;: [
      &quot;node_modules/@types&quot;
    ],
    &quot;lib&quot;: [
      &quot;es2017&quot;,
      &quot;dom&quot;
    ],
    &quot;paths&quot;: {
        &quot;rxjs&quot;: [
          &quot;node_modules/rxjs&quot;
        ],
        &quot;rxjs/*&quot;: [
          &quot;node_modules/rxjs/*&quot;
        ]
    }
  }
}
```

This forces the sample project to always use the root copy of RxJS in **node\_modules** and to ignore any others found nested in additional packages. While I wish there was another way to work around the issue, this approach gets the job done.</content:encoded></item><item><title>Debugging jasmine-ts Unit Tests in VS Code</title><link>https://blog.codewithdan.com/debugging-jasmine-ts-unit-tests-in-vs-code/</link><guid isPermaLink="true">https://blog.codewithdan.com/debugging-jasmine-ts-unit-tests-in-vs-code/</guid><pubDate>Fri, 11 Oct 2019 00:00:00 GMT</pubDate><content:encoded>&lt;figure&gt;

![](/images/blog/debugging-jasmine-ts-unit-tests-in-vs-code/debugging-1024x594.webp)

&lt;figcaption&gt;

Image created by Mohamed Hassan

&lt;/figcaption&gt;

&lt;/figure&gt;

I&apos;m currently working on a project that relies on [jasmine-ts](https://www.npmjs.com/package/jasmine-ts) to run unit tests. While it&apos;s been working great, I encountered a bug in a unit test that required a lot more than a simple console.log() statement to figure out. I needed real debugging!

Since my unit tests were running and providing output directly to the console, the question became, &quot;How do you attach to a jasmine-ts unit test in VS Code?&quot;. I found a few StackOverflow posts and finally went with something [mentioned here](https://stackoverflow.com/questions/50204143/debug-jasmine-tests-written-in-typescript-node-in-vs-code) (shout-out to **isaacfi** for providing the answer that actually worked).

To debug a jasmine-ts unit test spec directly in VS Code, add a new debug configuration (click the gear icon in the debug pane), and add the following into the launch.json file:

```json
&quot;configurations&quot;: [
    {
      &quot;type&quot;: &quot;node&quot;, 
      &quot;request&quot;: &quot;launch&quot;, 
      &quot;name&quot;: &quot;Jasmine Current File&quot;, 
      &quot;program&quot;: &quot;${workspaceFolder}/node_modules/jasmine-ts/lib/index&quot;,
      &quot;args&quot;: [&quot;--config=./spec/jasmine.json&quot;, &quot;${file}&quot;],
      &quot;console&quot;: &quot;integratedTerminal&quot;,
      &quot;internalConsoleOptions&quot;: &quot;neverOpen&quot;
    }
]
```
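The **--config** argument points at a standard Jasmine config file. A minimal example of what **spec/jasmine.json** might contain (the values below are illustrative, not taken from my project):

```json
{
  "spec_dir": "spec",
  "spec_files": ["**/*[sS]pec.ts"],
  "stopSpecOnExpectationFailure": false,
  "random": false
}
```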

In my scenario I had my jasmine.json file in a &quot;spec&quot; subfolder, so you may need to change that path for your setup. Once the launch.json file is in place you can open the target spec file, set a breakpoint, and start debugging away! An example of a project where I&apos;m using this can be [found here](https://github.com/DanWahlin/Observable-Store).</content:encoded></item><item><title>Angular Architecture Concepts - ngVikings Keynote</title><link>https://blog.codewithdan.com/angular-architecture-concepts-ngvikings-keynote/</link><guid isPermaLink="true">https://blog.codewithdan.com/angular-architecture-concepts-ngvikings-keynote/</guid><pubDate>Thu, 06 Jun 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/angular-architecture-concepts-ngvikings-keynote/2019-06-06_08-55-35-1024x544.webp)

The [ngVikings](https://ngvikings.org/) conference was held in Copenhagen, Denmark this year and I had a great time speaking at it and talking with the people that attended. One of the more fun aspects of attending any conference is listening to and learning from what others are doing, hearing about problems they&apos;re trying to solve, helping out where possible, and making new friends along the way.

I had the opportunity to give one of the keynote talks this year and focused on a topic that is near and dear to me - _architecture_. Some companies seem to have the viewpoint that because a front-end application is &quot;JavaScript&quot; (something I mention at the beginning of the talk), not a lot of planning for the application needs to happen. That&apos;s of course incorrect: architecture and overall application planning are just as critical here as they would be for a traditional web application or desktop application.

I had to fight through these guys to get to the stage, but after some negotiating they let me pass. :-)

![](/images/blog/angular-architecture-concepts-ngvikings-keynote/2019-05-28-09.08.33-vikings-1024x768.webp)

Topics covered in the talk include:

- Application Planning
- Module Organization
- Structuring Components
- Component Communication
- State Management

## Angular Architecture Concepts

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/BS5G7Pqgqck&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

Additional resources from the talk can be found below:

- [Angular Architecture Project](https://github.com/DanWahlin/angular-architecture)
- [Angular JumpStart Project](https://github.com/DanWahlin/Angular-JumpStart)
- [Slides](https://docs.google.com/presentation/d/1zayHhGRCe3bdPu9ZRxc54TXFxvZYyvoMmqIoR6c8Z60/edit?usp=sharing)

If you&apos;d like even more details on this topic, check out the [Angular Architecture and Best Practices](https://blog.codewithdan.com/new-pluralsight-course-angular-architecture-and-best-practices/) course on Pluralsight. It has a lot more detail about the concepts covered in this talk (as well as many others). My company also offers an onsite, instructor-led [Angular Architecture](https://codewithdan.com/products/angular-architecture) training course for teams.</content:encoded></item><item><title>Deploying Your Angular Apps (using containers) - ngVikings Talk</title><link>https://blog.codewithdan.com/deploying-your-angular-apps-using-containers-ngvikings-talk/</link><guid isPermaLink="true">https://blog.codewithdan.com/deploying-your-angular-apps-using-containers-ngvikings-talk/</guid><pubDate>Thu, 06 Jun 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/deploying-your-angular-apps-using-containers-ngvikings-talk/2019-06-06_08-43-25-1024x525.webp)

I had the opportunity to attend and speak at [ngVikings](https://ngvikings.org/) this year in Copenhagen, Denmark, which was a lot of fun. Copenhagen is a beautiful city and the conference organizers did a great job putting the event together. One of the talks I gave at the conference covered deploying Angular applications using containers. While the focus was on Angular and any services it may call, the concepts can really be applied to any front-end or back-end application or service.

Topics covered in the talk include:

- Deployment Challenges
- What is Docker?
- Images and Containers
- Orchestration with Docker Compose
- Orchestration with Kubernetes (introductory look)

## Deploying Your Angular Apps (using containers)

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/IDDCXsDwuPo&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

Additional resources from the talk can be found below:

- [Code project](https://github.com/DanWahlin/Angular-Core-Concepts)
- [Slides](https://docs.google.com/presentation/d/1CMZbzzyDIPJXSfjl9-s9ms4HL6dMfnxWv-Lot8fE4Mo/edit?usp=sharing)

If you&apos;d like even more details on this topic check out my [Containerizing Angular Applications with Docker](https://blog.codewithdan.com/new-pluralsight-course-containerizing-angular-applications-with-docker/) or [Docker for Web Developers](https://blog.codewithdan.com/docker-for-web-developers-now-with-kubernetes/) courses on Pluralsight. My company also offers an onsite instructor-led training course covering [Docker and Kubernetes](https://codewithdan.com/products/docker-kubernetes) as well for teams.</content:encoded></item><item><title>ng-conf Talk: Mastering the Subject - Communication Options in RxJS</title><link>https://blog.codewithdan.com/ng-conf-talk-mastering-the-subject-communication-options-in-rxjs/</link><guid isPermaLink="true">https://blog.codewithdan.com/ng-conf-talk-mastering-the-subject-communication-options-in-rxjs/</guid><pubDate>Wed, 08 May 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/ng-conf-talk-mastering-the-subject-communication-options-in-rxjs/2019-05-08_23-01-01-1024x574.webp)

If you ever get a chance to attend the **[ng-conf conference](https://ng-conf.org)** in Salt Lake City, Utah, I highly recommend it. It&apos;s one of my favorite conferences to attend and speak at due to the great content, huge community of developers, and many fun events throughout the week. The conference organizers do a great job putting on the event.

This year I had the opportunity to do a 2-day workshop with my good friend **[John Papa](https://twitter.com/john_papa)** on Angular Architecture concepts. We had a very interactive group of nearly 200 people in the workshop and enjoyed sharing project battle stories, best practices, and techniques that should be considered when planning and building Angular applications. It was a fun 2-day event and I&apos;m already looking forward to next year&apos;s workshop.

In addition to the workshop, I also gave a talk titled **[Mastering the Subject - Communication Options in RxJS](https://www.youtube.com/watch?v=_q-HL9YX_pk)**. The talk introduced different [**RxJS**](https://rxjs.dev/) subject options such as **Subject**, **BehaviorSubject**, **ReplaySubject**, and **AsyncSubject** and discussed patterns to communicate between different components in an application and even between services. I focused on **Event Buses**, **Observable Services**, and briefly mentioned my **[Observable Store](https://github.com/DanWahlin/Observable-Store)** state management solution (which provides a simple yet powerful way to add state management into any type of front-end project - Angular/React/Vue or anything else). You can watch the talk below.
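To give a rough sense of how a **Subject** differs from a **BehaviorSubject**, here&apos;s a simplified sketch of the two ideas (my own illustration for this post - NOT the actual RxJS implementation or API, and the class names **SimpleSubject** and **SimpleBehaviorSubject** are made up; use the rxjs package in real applications):

```typescript
// Simplified sketch of a Subject vs. a BehaviorSubject. This is NOT the
// real RxJS implementation or API - use the rxjs package in real apps.
interface Observer {
  (value: unknown): void;
}

class SimpleSubject {
  protected observers: Observer[] = [];

  subscribe(observer: Observer): void {
    this.observers.push(observer);
  }

  next(value: unknown): void {
    // Broadcast to current subscribers only; late subscribers miss the value.
    for (const observer of this.observers) {
      observer(value);
    }
  }
}

class SimpleBehaviorSubject extends SimpleSubject {
  private current: unknown;

  constructor(initial: unknown) {
    super();
    this.current = initial;
  }

  subscribe(observer: Observer): void {
    super.subscribe(observer);
    // Replay the latest value to late subscribers.
    observer(this.current);
  }

  next(value: unknown): void {
    this.current = value;
    super.next(value);
  }
}
```

In this light, an event bus is really just a shared subject instance exposed through a service that components publish to and subscribe from.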

## Mastering the Subject - Communication Options in RxJS

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/_q-HL9YX_pk&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;p&gt;&lt;/p&gt;&lt;/iframe&gt;

The code for the various topics covered can be [found here](https://github.com/danwahlin/angular-architecture). This code is part of our [Angular Architecture training course](https://codewithdan.com/products/angular-architecture) and also used in my [Angular Architecture and Best Practices](https://blog.codewithdan.com/new-pluralsight-course-angular-architecture-and-best-practices/) video course on Pluralsight.</content:encoded></item><item><title>Docker for Web Developers - Now with Kubernetes!</title><link>https://blog.codewithdan.com/docker-for-web-developers-now-with-kubernetes/</link><guid isPermaLink="true">https://blog.codewithdan.com/docker-for-web-developers-now-with-kubernetes/</guid><pubDate>Fri, 26 Apr 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/docker-for-web-developers-now-with-kubernetes/container-ship-1024x690.webp)

Over the past year I&apos;ve done several big updates to my [Docker for Web Developers](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/docker-web-development) course on Pluralsight that I wanted to mention. First, all of the code samples have been updated and Docker Desktop (formerly called Community Edition) is now covered in addition to Docker Toolbox.

The biggest update came when I added a new module into the course titled &quot;Moving to Kubernetes&quot;. This module provides an overview of [Kubernetes](https://kubernetes.io) and what it is (a very exciting technology!), examples of using key Kubernetes commands, and an example of moving the Docker Compose orchestrated application shown in the course to Kubernetes. I hope you&apos;ll check out the new chapter if you&apos;ve already viewed the course in the past.

If you&apos;re new to the course, here are more details about how it came about, and what it includes.

## Docker for Web Developers

I’ve been using Docker for many years now and am still super excited about the benefits it offers software developers. In fact, I was so excited about the features that I decided to create a full video course on [Pluralsight.com](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/docker-web-development). The course has over 5 hours of in-depth information about why and how you’d use Docker in your Web development environment. Here’s a small sampling of some of the topics covered:

- Why use Docker as a Developer?
- Benefits that containers offer developers
- The difference between Docker Images and Virtual Machines
- Installing Docker Desktop or Docker Toolbox on Mac and Windows
- The role of Docker Hub for pulling images
- Key Docker Client commands
- How do you hook your source code into Docker?
- How do you build custom Docker images?
- Creating and using custom Dockerfiles and images
- How do multiple containers talk to each other at runtime?
- Bring up a complete development or production environment with Docker Compose
- Push custom images to Docker Hub
- Introduction to Kubernetes
- Moving from Docker Compose to Kubernetes
- Much more!

Here’s the official course table of contents…

## Docker for Web Developers Course Outline

![](/images/blog/docker-for-web-developers-now-with-kubernetes/2019-04-26_14-47-00.webp)

Docker&apos;s open app-building platform can give any web developer a boost in productivity. You&apos;ll learn how to use Docker, how to work with images and containers, how to orchestrate containers, how to work with volumes, and much more. [View the course here.](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/docker-web-development)

1. Course Overview
2. Why Use Docker as a Developer?
3. Setting Up Your Dev Environment with Docker Toolbox
4. Using Docker Machine and Docker Client
5. Hooking Your Source Code into a Container
6. Building Custom Images with Dockerfile
7. Communicating Between Docker Containers
8. Managing Containers with Docker Compose
9. Running Your Containers in the Cloud
10. Moving to Kubernetes

[Get to the full course here!](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/docker-web-development)</content:encoded></item><item><title>Data-Oriented vs. Control-Oriented Programming</title><link>https://blog.codewithdan.com/data-oriented-vs-control-oriented-programming/</link><guid isPermaLink="true">https://blog.codewithdan.com/data-oriented-vs-control-oriented-programming/</guid><pubDate>Wed, 24 Apr 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/data-oriented-vs-control-oriented-programming/framework-1024x682.webp)

I recently had someone ask me a question on Twitter about moving to Single Page Application frameworks and why they&apos;d do that over choosing vanilla JavaScript or jQuery. It&apos;s a great question but tough to answer on Twitter with the limited number of characters. So, I thought I&apos;d dust off an old post I wrote many years ago to address the subject. I also wrote a post titled [Choosing the &quot;Right&quot; JavaScript Library/Framework for Your Application](https://blog.codewithdan.com/choosing-the-right-javascript-library-framework-for-your-application/) that has some additional ideas to consider as well.

Any type of front-end app can be built using vanilla JavaScript, jQuery, or a Single Page Application (SPA) framework or library. Let&apos;s face it - the end user really doesn&apos;t care what option we choose as developers. When is the last time one of your end users asked, &quot;Hey Michelle - what type of coding platform did you use for that app? Tell me more about the application architecture too!&quot;.

Control-oriented versus data-oriented programming really comes down to how much you want to focus on custom coding versus solving business issues in a given application. With the vanilla JavaScript approach you have &quot;raw&quot; access to the DOM, but you also have to write everything yourself including things like routing, data binding, HTTP calls to the server, and more. With jQuery you get a simplified way to access the DOM, handle events, make HTTP calls, etc., but you still have to write a lot of custom code to handle other scenarios and get data in and out of screens. If you want data binding support (more on this in a moment) you can choose one of many libraries out there. One example of a data binding library is [KnockoutJS](https://knockoutjs.com/). It&apos;s been around for quite a while, but it can significantly reduce the amount of code you write to get data in and out of your screens. Finally, there are Single Page Application (SPA) libraries and frameworks that provide a lot of functionality out of the box especially when it comes to working with data.

I prefer to go with a well established and well supported SPA library/framework whether it&apos;s Angular, React, Vue.js or something else. Having written many front-end applications over the years I&apos;ve come to realize that taking the time to write (and support) custom code to do what many SPA libraries/framework do out of the box isn&apos;t worth my time and effort. That&apos;s a very subjective opinion of course, but I&apos;m confident that a lot of front-end developers out there agree with that sentiment. Plus, I can&apos;t possibly keep up with all of the different security challenges that come up and ensure that a custom framework/library I write from scratch accounts for any new hacks.

For me it really boils down to whether you want to write all of the code to interact with controls on your screen or whether you want to let data drive what displays on the screen. I call it &quot;Data-Oriented vs. Control-Oriented Programming&quot;.

## Control-Oriented Programming

Data binding is a key aspect of client-centric programming that can significantly minimize the amount of code written, simplify maintenance, and ultimately reduce the number of bugs that crop up in an application. Without data binding you have to locate each control in a page with code and then assign or extract a value to/from it – something I call “control-oriented” programming. Here’s a particularly nasty example of control-oriented programming (a lot of potential refactoring could be applied with this old code, but notice how many controls are accessed):

```
function loadApprovalDiv()
{
    var subTotal = parseFloat($(&apos;#SubTotal&apos;).text());
    $(&apos;#ClientSubTotal&apos;).val(subTotal.toFixed(2));
    var salesTaxRate = parseFloat($(&apos;#SalesTaxRate&apos;).val()) / 100;
    var salesTaxAmount = (subTotal * salesTaxRate) * .9;
    var deliveryFee = parseFloat($(&apos;#DeliveryFee&apos;).val());
    var adminFee = ((subTotal + salesTaxAmount + deliveryFee) * .05);
    var total = (Round(subTotal) + Round(salesTaxAmount) +  
      Round(deliveryFee) + Round(adminFee));
    $(&apos;#ClientTotal&apos;).val(total);
    var deliveryAddress = $(&apos;#Delivery_Street&apos;).val();
    //See if they entered a suite
    if ($(&apos;#Delivery_Suite&apos;).val() != &apos;&apos;) {
      deliveryAddress += &apos;, Suite &apos; + $(&apos;#Delivery_Suite&apos;).val();
    }
    deliveryAddress += &apos; &apos; + $(&apos;#Delivery_City&apos;).val() + &apos; &apos; + 
       $(&apos;#Delivery_StateID option:selected&apos;).text() + &apos; &apos; +
       $(&apos;#Delivery_Zip&apos;).val();
   
    var data = {
      finalSubTotal  : subTotal.toFixed(2),
      finalSalesTax  : salesTaxAmount.toFixed(2),
      finalTotal     : total.toFixed(2),
      deliveryFee    : deliveryFee.toFixed(2),
      adminFee       : adminFee.toFixed(2),
      deliveryName   : $(&apos;#Delivery_Name&apos;).val(),
      deliveryAddress: deliveryAddress,
      deliveryDate   : $(&apos;#Delivery_DeliveryDate&apos;).val(),
      deliveryTime   : $(&apos;#Delivery_DeliveryTime option:selected&apos;)
                         .text(),
      mainItems      : generateJson(&apos;Main&apos;),
      accessoryItems : generateJson(&apos;Accessory&apos;)
    };
    $(&apos;#OrderSummaryOutput&apos;).html(
      $(&apos;#OrderSummaryTemplate&apos;).render(data)
    );
}
```

Looking through the code you can see that a lot of it is dedicated to finding controls in the page and extracting their values. This works absolutely fine – after all, many applications take this approach. However, when an application is focused on controls and not on data a lot of extra code and plumbing ends up being written which complicates things if control IDs are changed, new controls are added, or existing controls are removed. If you only have a few controls that’s not a big deal, but as the number of controls grows it becomes problematic. [The cheese](https://en.wikipedia.org/wiki/Who_Moved_My_Cheese%3F) has definitely moved when it comes to client-side programming and the smart money is on building data-oriented applications rather than control-oriented applications like the one above.

## Data-Oriented Programming

I refer to applications that use data binding as being “data-oriented” since they’re focused on the actual data as opposed to writing code to access controls in a given page (“control-oriented” as mentioned earlier). I’ve built a lot of control-oriented applications over the years and found that making the transition to building data-oriented applications definitely requires a different thought process. However, making the move to building data-oriented applications is well worth the effort and ultimately results in better applications in my experience. I think it’s especially important for front-end applications built using JavaScript.

If you&apos;re already using a SPA framework/library or data binding library then I&apos;m &quot;preaching to the choir&quot; since you already get the value of data-oriented programming. However, if you&apos;re new to this front-end world then data-oriented programming is something I&apos;d highly recommend you consider and look into more.

Here are a few (very simple) examples of what I mean if you&apos;re new to the concept:

#### Knockout.js

```
&lt;input data-bind=&quot;value: customer.firstName&quot; /&gt;
```

#### Angular

```
&lt;input [(ngModel)]=&quot;customer.firstName&quot; /&gt;
```

#### React

```
&lt;input type=&quot;text&quot; value={this.state.customer.firstName} 
  onChange={this.handleChange} /&gt;
```

#### Vue.js

```
&lt;input v-model=&quot;customer.firstName&quot; /&gt;
```

These different code examples will automatically handle updating the target property (**firstName** in this case) as the textbox changes. Notice how no **id** is required at all on each input and NO CODE exists to go find the input. Now imagine a form that has many textboxes, textareas, radio buttons, checkboxes, etc. and you can see how the data-oriented approach leads to much cleaner code. Plus, you can use this same approach to let your data drive showing and hiding parts of a screen, handle events, and perform many other productive tasks. Simply flip a boolean property from false to true, and with the right data bindings in place magic just happens.

By using a data-oriented library/framework you can wire JavaScript object properties to controls and have the controls and object properties update automatically (a process referred to as “two-way” binding) as changes are made on either side. This means that you don’t have to write selectors to find controls in the DOM and update them or grab values as mentioned earlier. If you’ve ever worked with desktop development frameworks then you’re more than likely used to this type of functionality and I’m willing to bet that you can’t live without it. Data binding is addictive once you start using it.
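To give a rough sense of what a binding engine does behind the scenes, here&apos;s a tiny framework-free sketch of the model-to-view half using a Proxy. The **observable** function and **ChangeListener** interface are hypothetical names I&apos;m using for this illustration; real binding engines are far more sophisticated and also wire DOM events back into the model.

```typescript
// Minimal change-notification sketch: wrap a model object in a Proxy so
// every property assignment notifies a listener. This covers only the
// model-changed, update-the-view half of two-way binding; real binding
// engines also wire DOM events back into the model.
interface ChangeListener {
  (prop: string, value: unknown): void;
}

function observable(model: object, onChange: ChangeListener): any {
  return new Proxy(model, {
    set(target, prop, value) {
      Reflect.set(target, prop, value);
      // A real binding engine would re-render any bound controls here.
      onChange(String(prop), value);
      return true;
    }
  });
}
```

With something like this in place, assigning a new value to a property such as **firstName** triggers the listener, and the binding engine (not your code) updates any control bound to that property - no selectors required.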

Although I’ve been a big fan of jQuery and vanilla JavaScript over the years, as I wrote more and more front-end applications I realized that a lot of unnecessary code was being written that could be eliminated by using a data-oriented approach. jQuery and vanilla JavaScript still have their place in some applications (not every application has robust data binding needs after all), but using those options to build data-oriented applications isn’t a good use of their functionality in my opinion - and not a good use of your time either. Those options are great when you require low-level DOM access but not as great when an application has a lot of Create, Read, Update, Delete (CRUD) operations going on, a lot of user interaction, and more. When you understand what a data-oriented application really is and why it’s important, then using that technique makes more sense for CRUD applications as well as many other application types.

## Conclusion

With SPA frameworks/libraries like Angular/React/Vue.js and others the data binding engine is built-in so going with a data-oriented approach is fairly straightforward. The challenge in the JavaScript world is that there isn’t simply one “best” data binding option to choose. Many different script libraries/frameworks continue to appear on the scene with their own set of pros and cons. The next challenge is [choosing the framework/library](https://blog.codewithdan.com/choosing-the-right-javascript-library-framework-for-your-application/) that works best for you - just make sure it has data-oriented programming support built-in!</content:encoded></item><item><title>4 kubectl Commands to Help Debug Pod Issues in Kubernetes</title><link>https://blog.codewithdan.com/4-kubectl-commands-to-help-debug-pod-issues-in-kubernetes/</link><guid isPermaLink="true">https://blog.codewithdan.com/4-kubectl-commands-to-help-debug-pod-issues-in-kubernetes/</guid><pubDate>Sun, 14 Apr 2019 00:00:00 GMT</pubDate><content:encoded>&lt;figure&gt;

![mac command by Hannah Joshua](/images/blog/4-kubectl-commands-to-help-debug-pod-issues-in-kubernetes/46T6nVjRc2w.webp)

&lt;figcaption&gt;

mac command by Hannah Joshua

&lt;/figcaption&gt;

&lt;/figure&gt;

![](/images/blog/4-kubectl-commands-to-help-debug-pod-issues-in-kubernetes/2019-03-10_16-06-42.webp)

If you&apos;ve worked with containers a lot, you&apos;re probably comfortable with commands like **docker logs** and **docker exec** to retrieve information about containers that may be having problems. One of the challenges that comes up as people move to Kubernetes is understanding how to get similar details about Pods and any containers running within them. I&apos;ve had several people ask me about this recently in my instructor-led [Kubernetes course](https://codewithdan.com/products/docker-kubernetes) as well as online with my [Docker for Web Developers](https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) course (which has a module on Kubernetes) so I decided to post a few of the initial commands you can use to get started resolving Pod and container issues.

## Checking Pod Logs with kubectl logs

The first thing I normally do if a Pod is having problems is check the logs for any errors. This is very similar to **docker logs**.

```bash
kubectl logs [pod-name]
```

If the Pod contains more than one container you can use the **\-c** switch to target a specific container. Use the container name defined in the Pod or Deployment YAML.

```bash
kubectl logs [pod-name] -c [container-name]
```

Note: Run **kubectl get pod \[pod-name\] -o yaml** or **kubectl get deployment \[deployment-name\] -o yaml** if you&apos;re not sure about the name of the container. The **\-o yaml** switch is useful for getting additional information about the Pod by the way - more information on that technique will be provided a little later.

To get logs for all containers in a Pod (if you have more than 1) you can run the following:

```bash
kubectl logs [pod-name] --all-containers=true
```

If you want to get logs for the previous instance of a Pod&apos;s container (after a crash or restart, for example), add the **\-p** flag:

```bash
kubectl logs -p [pod-name]
```

Finally, to stream the logs for a Pod use the **\-f** flag:

```bash
kubectl logs -f [pod-name]
```

[kubectl logs documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs)

## Describing a Pod with kubectl describe

You can run the **kubectl describe** command to see information about the Pod as well as events that have run (look at the bottom of the output for the events). This is really helpful to see if the image for a container was pulled correctly, if the container started in the Pod, any Pod reschedule events, and much more.

```bash
kubectl describe pod [pod-name]
```

In some cases describe events may lead to the discovery that the troubled Pod has been rescheduled frequently by Kubernetes. It&apos;s great that this happens (when set up properly with a Deployment for example), but it&apos;s also good to get to the bottom of &quot;why&quot; a Pod is being rescheduled to determine if there&apos;s a bug in the code that&apos;s running, a memory leak, or another issue.

[kubectl describe documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe)

## Viewing the Pod YAML with -o yaml

You can also run **kubectl get** on a troubled Pod and display the full YAML (or JSON) instead of just the basic Pod information. In many scenarios this may yield some useful information.

```bash
kubectl get pods [pod-name] -o yaml
```

You can do the same thing for a specific Deployment as well:

```bash
kubectl get deployment [deployment-name] -o yaml
```

[kubectl get documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get)

## Shelling into a Pod Container with kubectl exec

In some cases you may need to get into a Pod&apos;s container to discover what is wrong. With Docker you would use the **docker exec** command. Kubernetes is similar:

```bash
kubectl exec [pod-name] -it -- sh
```

[kubectl exec documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec)

Running one of these commands will help provide some initial information about what may be going on with a troubled Pod/Container. There are of course many other techniques that can be used as well to diagnose Pod issues (checking the UI Dashboard, monitoring, viewing stats about containers, and much more), but these should help get you started if you&apos;re new to Kubernetes.

[Discuss on Twitter](https://twitter.com/search?q=https%3A%2F%2Fblog.codewithdan.com%2F4-kubectl-commands-to-help-debug-pod-issues-in-kubernetes%2F&amp;src=typd)</content:encoded></item><item><title>Using the Iterator Pattern in JavaScript</title><link>https://blog.codewithdan.com/using-the-iterator-pattern-in-javascript/</link><guid isPermaLink="true">https://blog.codewithdan.com/using-the-iterator-pattern-in-javascript/</guid><pubDate>Sat, 06 Apr 2019 00:00:00 GMT</pubDate><content:encoded>&lt;figure&gt;

![Roller coaster in a park by Priscilla Du Preez](/images/blog/using-the-iterator-pattern-in-javascript/FOsina4f7qM.webp)

&lt;figcaption&gt;

Roller coaster in a park by Priscilla Du Preez

&lt;/figcaption&gt;

&lt;/figure&gt;

I recently had to parse some markdown using the [marked npm package](https://www.npmjs.com/package/marked) and convert it into JSON objects for a project I&apos;m working on. When I parsed the markdown I&apos;d get back an array of tokens that would look something like the following:

```json
[
  { &quot;type&quot;: &quot;heading&quot;, &quot;depth&quot;: 1, &quot;text&quot;: &quot;Course: React Core Concepts&quot; },
  { &quot;type&quot;: &quot;paragraph&quot;, &quot;text&quot;: &quot;In these workshop labs you&apos;ll learn about React core concepts and see how it can be used to build Single Page Applications (SPAs).&quot; },
  { &quot;type&quot;: &quot;space&quot; },
  { &quot;type&quot;: &quot;paragraph&quot;, &quot;text&quot;: &quot;Topics covered include:&quot; },
  { &quot;type&quot;: &quot;space&quot; },
  { &quot;type&quot;: &quot;list_start&quot;, &quot;ordered&quot;: false, &quot;start&quot;: &quot;&quot;, &quot;loose&quot;: false },
  { &quot;type&quot;: &quot;list_item_start&quot;, &quot;task&quot;: false, &quot;loose&quot;: false },
  { &quot;type&quot;: &quot;text&quot;, &quot;text&quot;: &quot;Creating and using components&quot; },
  { &quot;type&quot;: &quot;list_item_end&quot; },
  { &quot;type&quot;: &quot;list_item_start&quot;, &quot;task&quot;: false, &quot;loose&quot;: false },
  { &quot;type&quot;: &quot;text&quot;, &quot;text&quot;: &quot;Working with Data Binding&quot; },
  { &quot;type&quot;: &quot;list_item_end&quot; },
  { &quot;type&quot;: &quot;list_item_start&quot;, &quot;task&quot;: false, &quot;loose&quot;: false },
  { &quot;type&quot;: &quot;text&quot;, &quot;text&quot;: &quot;Using Axios to retrieve data from a server&quot; },
  { &quot;type&quot;: &quot;list_item_end&quot; },
  { &quot;type&quot;: &quot;list_item_start&quot;, &quot;task&quot;: false, &quot;loose&quot;: false },
  { &quot;type&quot;: &quot;text&quot;, &quot;text&quot;: &quot;Routing&quot; },
  { &quot;type&quot;: &quot;space&quot; },
  { &quot;type&quot;: &quot;list_item_end&quot; },
  { &quot;type&quot;: &quot;list_end&quot; }
]
```

I started out the &quot;normal&quot; way by doing a **for...of** loop to iterate through the tokens in the array. This worked, but tracking the start and end of a token meant adding extra variables which ultimately complicated the code. For example, how do you know if you&apos;re in a list? You track it with an **inList** variable or something similar. That works, but it could definitely be better especially since lists were only one of several types of objects I needed to track.

As the code progressed, I realized that sometimes I needed the **index** value as I was looping through the tokens. So, I changed the code to loop through the tokens using a standard **for** loop. While that worked, I still had the problem of tracking where I was in the overall object that was processed (such as a list of items) and it wasn&apos;t as simple as I wanted when I needed to move to the next token manually.

For example, to get all of the items in the list, I had to set an **inList** type of variable when the **list\_start** token was encountered. Then when the looping continued I had to look for **list\_item\_start**, and then the **text** token. Since I couldn&apos;t access **list\_start** and then easily move down 2 spots to the text I wanted, it was more challenging than it should have been. While I made it work, there were several other scenarios where I ran into this challenge as well.

Although I was able to get my first iteration of the code working fairly quickly, it felt really complex and didn&apos;t sit well with me at all. One of those moments where you realize that while the code works, you&apos;ll never be able to maintain it in the future without remembering all of the little variables that were added and how they were used. If I can&apos;t look at code in the future and get a quick feel for what it&apos;s doing without a lot of analysis, then the code is probably more complex than it needs to be.

I tweeted the following about the current state of the code since it was amusing how it started out so simple and then became so complex:

https://twitter.com/DanWahlin/status/1113641653347045377

I started the process of refactoring the code and came up with some good optimizations, but tracking &quot;Where the hell am I...I&apos;m lost!&quot; in the token array was still challenging. After thinking about it more, considering other options such as map/filter, I decided to bite the bullet and refactor the code yet again to use the **[iterator pattern](https://en.wikipedia.org/wiki/Iterator_pattern)** to make it easy to know where I was in the process.

&gt; The iterator pattern is a design pattern in which an iterator is used to traverse a container and access the container&apos;s elements.
&gt; 
&gt; [https://en.wikipedia.org/wiki/Iterator\_pattern](https://en.wikipedia.org/wiki/Iterator_pattern)

I realized early on that using this type of pattern might be easier (I used it a lot in other languages/frameworks), but I was too far down the rabbit hole to go back up. After reaching the bottom of the hole I realized it would be worth the time to convert the code.

I added the following code into the class I was working with to enable doing custom iterations over the tokens:

```typescript
tokenIterator(tokens: MarkdownToken[]) {
    let index = -1;
    return {
        next: () =&gt; {
            index++;
            if (index &lt; tokens.length) {
                return { token: tokens[index], index, 
                         done: false };
            } else {
                return { done: true, index };
            }
        },
        peek: () =&gt; {
            if (index + 1 &lt; tokens.length) {
                return { token: tokens[index + 1], 
                         index: index + 1, done: false }
            }
            else {
                return { done: true, index };
            }
        }
    }
}
```

This enabled me to easily move from token to token without relying on some type of **for loop**. It also added the ability to &quot;peek&quot; at the next item without consuming it (more on this in a moment). If you&apos;ve worked with Java, C#, or other languages you&apos;ll recognize this type of pattern since it&apos;s very common in many languages and one of the [GOF patterns](https://en.wikipedia.org/wiki/Design_Patterns).

By adding the token iterator I could now do something like the following to iterate through the tokens.

```typescript
this.currentResult = this.iterator.next();
while (!this.currentResult.done) {
    const token = this.currentResult.token;
    if (token.type === &apos;heading&apos;) {
        switch (token.depth) {
           case 1: // course
               ...
           case 2: // lab
               ...
           case 3: // exercise
               ...
           case 4: // step
               ...
        }
    }
    this.currentResult = this.iterator.next();
}
```

This meant that any time I needed to move to the next item I could simply call **this.iterator.next()**. That made working with nested child object scenarios MUCH easier overall. For example, working with a list meant iterating over the tokens until I found the **list\_end** token. No additional state tracking was needed to know where I was in the tokens.

```typescript
let text = &apos;&apos;;
let listToken = token;
while (listToken.type !== &apos;list_end&apos;) {
    if (listToken.type === &apos;text&apos;) {
        text += this.convertFromMarkdown(listToken.text.trim());
    }
    listToken = this.iterator.next().token;
}
```

By using the **peek()** function I could easily look at the next token without actually moving to it as well:

```typescript
parseChildren(currentNode) {
    this.currentResult = this.iterator.next();
    while (!this.currentResult.done) {
        // do work here

        // See if we should move next or not since we don&apos;t
        // want to move to a &apos;header&apos; if the depth is &lt; 5
        const peekResult = this.iterator.peek();
        if (peekResult.done || 
             (peekResult.token.type === &apos;heading&apos; &amp;&amp; 
              peekResult.token.depth !== 5)) {
            break;
        }
        this.currentResult = this.iterator.next();
    }
}
```

There are many more things that can be done to the iterator code to enhance it (such as adding support for custom predicates, &quot;iterate until&quot; type logic, etc.), but it&apos;s easy to get started using and works well in the right situation.

While the iterator pattern isn&apos;t new by any means and has been used in JavaScript for a long time, I hope the general thought process described here might save someone from going down the wrong rabbit hole and creating a vicious code monster. :-)

[Discuss on Twitter](https://twitter.com/search?src=typd&amp;q=https%3A%2F%2Fblog.codewithdan.com%2Fusing-the-iterator-pattern-in-javascript%2F)</content:encoded></item><item><title>Docker Volumes and &quot;print working directory&quot; Command Syntax</title><link>https://blog.codewithdan.com/docker-volumes-and-print-working-directory-pwd/</link><guid isPermaLink="true">https://blog.codewithdan.com/docker-volumes-and-print-working-directory-pwd/</guid><pubDate>Fri, 29 Mar 2019 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/docker-volumes-and-print-working-directory-pwd/docker_logo.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/10/docker_logo.png)I often use Docker to run an application in a container as I&apos;m writing and testing code. That involves creating a volume that points the container to a path on my machine. The challenge with setting up volumes is that the &quot;print working directory&quot; command that is often used to easily identify the location of your source code on the host machine is different depending on which command terminal you&apos;re using (especially on Windows).

Here&apos;s a quick summary that shows the syntax for &quot;print working directory&quot; in different command terminals when using volumes (if you&apos;re new to volumes you can [read more about them here](https://docs.docker.com/engine/admin/volumes/volumes/)). An [nginx container](https://hub.docker.com/_/nginx/) path is shown to provide a simple example of a volume pointing to the current working directory on the host machine.

## Windows Command Window Syntax

You can use the %cd% syntax to represent the current working directory:

```bash
-v %cd%:/usr/share/nginx/html
```

## PowerShell Command Window Syntax

You can use the ${PWD} syntax to represent the current working directory:

```bash
-v ${PWD}:/usr/share/nginx/html
```

## Windows Subsystem for Linux (WSL) with a Windows Directory Syntax

If you&apos;re referencing a Windows directory from WSL2 you can use the following syntax.  
**NOTE:** It&apos;s recommended you reference a directory that is within WSL rather than within Windows for performance reasons.  

```bash
-v /mnt/c/username/some-windows-directory:/usr/share/nginx/html
```

## Git Bash on Windows Syntax

You can use the /$(pwd) syntax to represent the current working directory:

```bash
-v /$(pwd):/usr/share/nginx/html
```

## Mac/Linux Syntax

You can use $(pwd) syntax to represent the current working directory:

```bash
-v $(pwd):/usr/share/nginx/html
```
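
Putting one of these together, a complete **docker run** command on Mac/Linux might look like the following (the container name, detached mode, and port mapping are just example choices - swap in the &quot;print working directory&quot; variant your terminal needs):

```bash
# Serve the current directory with nginx at http://localhost:8080
docker run -d --name web \
  -p 8080:80 \
  -v $(pwd):/usr/share/nginx/html \
  nginx
```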

There are additional variations of the &quot;print working directory&quot; syntax shown above. If you prefer to use a different one please leave a comment with the information - share the knowledge!

[Discuss on Twitter](https://twitter.com/search?src=typd&amp;q=https%3A%2F%2Fblog.codewithdan.com%2Fdocker-volumes-and-print-working-directory-pwd%2F)</content:encoded></item><item><title>Simplifying Front-End State Management with Observable Store</title><link>https://blog.codewithdan.com/simplifying-front-end-state-management-with-observable-store/</link><guid isPermaLink="true">https://blog.codewithdan.com/simplifying-front-end-state-management-with-observable-store/</guid><pubDate>Sat, 16 Mar 2019 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/simplifying-front-end-state-management-with-observable-store/stones-1280-1024x576.webp)

I admit it - I think the use of some front-end JavaScript state management patterns has gotten out of control. When you&apos;re spending a significant amount of time writing code (and often a lot of it) to handle application state or relying on a scaffolding tool that generates 100s or even 1000s of lines of code then it&apos;s time to take a step back and ask, &quot;Do I really need all of this?&quot;. While you&apos;re at it you might also ask, &quot;What can I do to simplify my code and bring some sanity back to it?&quot;.

Rather than ranting about my views on keeping software simple, choosing the right tool for the right job, the importance of considering maintenance costs, challenges that more complex patterns present when contractors or new hires are involved, and more, let me get right to the point:

&gt; **I think front-end state management needs a big dose of simplicity!**

After hearing from many people and working on projects myself, I was frustrated with some of the state management options out there and decided to experiment with a simple solution that eventually became a project I call **Observable Store**. It turns out several people had a similar idea which was refreshing to see (there are a few similarly named projects on Github and npm).

**Note:** If you want my more opinionated view on state management complexity you can jump down to **[My Two Cents on State Management Complexity](#myTwoCents)** in this post.

## The Birth of Observable Store

One of the perks of my job is that I get to work with a lot of developers at companies around the world. This comes in the form of [architecture work, training, mentoring](https://codewithdan.com/products/productType/training), talking with people at conferences, meetups, [webinars](https://www.apexsystems.com/Events/Pages/CompletedEvents.aspx?filter=wahlin), and more. I&apos;ve had many conversations about various state management options and listened to stories about what has worked and what hasn&apos;t. One common comment I&apos;ve continually heard is, &quot;I wish there was a more simple way to handle state management in my front-end apps&quot;.

As I&apos;ve talked one on one with other architects and developers, helped people with their projects, and worked on my own, I&apos;ve often asked, &quot;What is it that you really want in a state management solution?&quot;. Here are the main goals that came out of asking that question:

1. Single source of truth
2. State is read-only/immutable
3. Provide state change notifications to any subscriber
4. Track state change history
5. Minimal amount of code required
6. Works with any front-end library/framework (Angular, React, Vue.js, or anything else that supports JavaScript)

I started playing around with adding these general goals/concepts into a simple library about 1 1/2 years ago and ultimately developed something I now call [Observable Store](https://www.npmjs.com/package/@codewithdan/observable-store). I use it for any front-end projects I work on (React, Vue.js, Angular or others) that need a state management solution. Observable Store satisfies the goals mentioned above but does so in an extremely simple way. The code for the library is only around 220 lines total since the &quot;power&quot; it provides comes from using [RxJS Subjects and Observables](https://rxjs.dev/). In fact, Observable Store only has 1 dependency - RxJS.

So why consider Observable Store? If you&apos;re interested in any of the goals shown earlier, it provides an extremely simple way to achieve them. You instantly get a single store that can be referenced throughout your app, state that is immutable (good for change detection in libraries/frameworks), state history tracking, and a way to subscribe to store changes. Plus, Observable Store can be used with any JavaScript library or framework. You&apos;re not locked into anything - except using JavaScript.

So how do you get started with Observable Store? Here&apos;s a quick overview.

## Getting Started with Observable Store

To get started with Observable Store you simply **npm install** it in your project (Angular, React, Vue.js, or any JavaScript project):

```bash
npm install @codewithdan/observable-store
```

From there you create a service class that extends **ObservableStore**. If you&apos;re working with TypeScript you can use a generic to pass the shape of the data that gets stored in the store (pass a class or interface). TypeScript isn&apos;t required though and it [works fine with ES2015](https://github.com/DanWahlin/Observable-Store#using-observable-store-with-react) (or even ES5) as well.

```typescript
// Optionally define what gets stored in the observable store
export interface StoreState {
    customers: Customer[];
    selectedCustomer: Customer;
    orders: Order[];
    selectedOrder: Order;
}

// Extend ObservableStore and optionally pass the store state
// using TypeScript generics (TypeScript isn&apos;t required though)
export class CustomersService extends ObservableStore&lt;StoreState&gt; {
  constructor() {
    // Pass initial store state (if desired). Want to track all
    // changes to the store? Set trackStateHistory to true.
    super(initialStoreState, { trackStateHistory: true });
  }
}
```

Now add any functions to your class to retrieve data from a data store and work with the data. Call **setState()** to set the state in the store or **getState()** to retrieve state from the store. When setting the state you can pass an action name which is useful when tracking [state changes and state history](https://github.com/DanWahlin/Observable-Store#using-observable-store-with-react).

```typescript
import { Observable, of } from &apos;rxjs&apos;;
import { ObservableStore } from &apos;@codewithdan/observable-store&apos;;

export class CustomersService extends ObservableStore&lt;StoreState&gt; {
    constructor() { 
        const initialState = {
            customers: [],
            selectedCustomer: null,
            orders: [],
            selectedOrder: null
        }
        super(initialState, { trackStateHistory: true });
    }
 
    get() {
        // Get state from store
        const customers = this.getState().customers;
        if (customers) {
            // Return RxJS Observable
            return of(customers);
        }
        else {
            // call server and get data
            // assume async call here that returns Observable
            return asyncData;
        }
    }
 
    add(customer: Customer) {
        // Get state from store
        let state = this.getState();
        state.customers.push(customer);
        // Set state in store
        this.setState({ customers: state.customers }, 
                      &apos;add_customer&apos;);
    }
 
    remove() {
        // Get state from store
        let state = this.getState();
        state.customers.splice(state.customers.length - 1, 1);
        // Set state in store
        this.setState({ customers: state.customers },
                      &apos;remove_customer&apos;);
    }
 
}
```

As the store state changes, any part of the application can be notified by subscribing to the store&apos;s **stateChanged** event. In this example, changes made to the store by CustomersService will be received, which provides a nice way to listen to a &quot;slice&quot; of the overall store.

```typescript
// Subscribe to the changes made to the store by 
// CustomersService. Note that you&apos;ll want to unsubscribe
// when done.
this.customersService.stateChanged.subscribe(state =&gt; {
  this.customers = state.customers;
});
```

Note that because the store state is immutable, a **stateChanged** subscriber will always get a &quot;fresh&quot; object back which works well with detecting state/data changes across libraries/frameworks. Because RxJS observables are used behind the scenes you can use all of the great operators that RxJS provides as well.

If you need to listen to all changes made to the store you can use the **[globalStateChanged](https://github.com/DanWahlin/Observable-Store#store-api)** event (thanks to [Mickey Puri](https://github.com/mickeypuri) for this contribution):

```typescript
// Subscribe to all store changes, not just the ones triggered
// by CustomersService
this.customersService.globalStateChanged.subscribe(state =&gt; {
  // access anything currently in the store here
});
```

You can even listen to a specific slice of the store (customers and orders for example) by supplying a **[stateSliceSelector](https://github.com/DanWahlin/Observable-Store#store-api)** function.

To handle orders, you can create another class that extends **ObservableStore** and add the order related functionality in it. By breaking the functionality out into separate classes you can achieve single responsibility (the &quot;S&quot; in SOLID) while still having only one store backing the entire application.

```typescript
// Extend ObservableStore
export class OrdersService extends ObservableStore&lt;StoreState&gt; {
  constructor() {
    // Define that we want to track changes that this object
    // makes to the store
    super({ trackStateHistory: true });
  }
}
```

Both **CustomersService** and **OrdersService** share the same store (as do all classes that extend ObservableStore in your application).

The Observable Store [API](https://github.com/DanWahlin/Observable-Store#store-api) and [settings](https://github.com/DanWahlin/Observable-Store#store-api) are simple to learn and you can get it up and running in no time at all. You can find examples of using it with Angular and React apps (I&apos;m hoping to add a Vue.js example in the near future) in the [Github repo](https://github.com/DanWahlin/Observable-Store).

Is Observable Store the answer to keeping state management simple in front-end applications? It&apos;s one potential solution that has worked well for my company and several other companies/developers who are using it. I&apos;ve been using it privately for over a year now and really enjoy the simplicity it brings to the table. If you try it out or have questions about it feel free to leave a comment below or in the [Github repo](https://github.com/DanWahlin/Observable-Store/issues).

## My Two Cents on State Management Complexity

I mentioned toward the beginning of this post that I didn&apos;t want to get into &quot;my&quot; opinion on state management since I prefer to focus on potential solutions rather than on problems. After all, I&apos;m just one guy with an opinion that some may agree with and some will definitely disagree with. Having said that, many people ask my opinion about this particular subject so here&apos;s a quick summary of where I stand.

I think we often get caught up in the &quot;group think&quot; mode of developing software (something that I&apos;m guilty of as well on occasion) and that results in great things and a lot of not so great things spreading like wildfire across the developer community. Because a concept or pattern is &quot;popular&quot; or &quot;everyone is using it&quot; we gravitate to it without digging in and considering if it&apos;s the best way to go for our specific application scenario, if it&apos;s actually necessary, and the pros/cons it brings to the team or project. It feels like a &quot;sheep off the cliff&quot; mentality in some cases. I recently [came across a post](https://medium.com/@amcdnl/the-future-of-javascript-state-management-is-less-state-management-ba1d97b99308) that echoes a lot of my thoughts on the &quot;state&quot; of front-end state management complexity.

As I&apos;ve worked with various companies around the world over the years, talked with developers at conferences, and interacted with people online, one of the main &quot;gripes&quot; I keep hearing can be summed up as, &quot;Front-end state management complexity is killing us!&quot;. I also hear, &quot;I can&apos;t believe how much code is added to our application to follow pattern X&quot;, or &quot;We&apos;re using technology X and Y at work across teams and can&apos;t share our state management code between them!&quot;.

In all fairness, some of the patterns that are available like Redux provide a lot of value. For example, consistency for a team, insight into the flow of data, better debugging in some cases, and more. **I don&apos;t think there&apos;s any dispute there so I want to make that clear**. Many people are using some of the different front-end state management patterns very successfully, especially with larger teams and a lot of moving parts. So what&apos;s the problem?

For starters, if everyone on a team doesn&apos;t understand a given pattern well, then they&apos;re copying and pasting code or using some type of scaffolding tool without really understanding what&apos;s going on and why they&apos;re doing it. As the application&apos;s complexity grows they feel more and more lost. This often applies to projects that bring in contractors, new hires, or developers that may not work solely in the front-end world. But, it applies to pure front-end developers too I&apos;ve found.

An argument can be made that anyone using a pattern without really understanding it needs to take time to learn the pattern better, and I think that&apos;s a valid point. But, when someone didn&apos;t choose the pattern used in a project and deadlines are looming, they don&apos;t have much of a choice but to push through it even if they don&apos;t fully understand what&apos;s going on. Plus, I think there&apos;s also an argument to be made that if a pattern requires that much time and code to learn then maybe it&apos;s worth considering if it&apos;s the best way to go in the first place? Keep in mind I&apos;m only talking about state management here. We still have the rest of the application to worry about as well.

In addition to understanding a pattern well, can you use the same code between different front-end JavaScript technologies and does the code look the same? For example, React has Redux, Angular has NgRx (Redux + RxJS), Vue.js has Vuex, and so on. That may not be an issue for you, but it is for several companies I work with because they don&apos;t want to maintain different implementations of the same overall pattern.

For the question, &quot;Can you use the same code between different front-end JavaScript technologies?&quot;, I&apos;m going to say the answer to that is a definite &quot;No!&quot; - sharing state management code often isn&apos;t an option in the majority of scenarios I&apos;ve seen. The pattern used may be similar in some cases, but the implementations are radically different between libraries/frameworks. If your company isn&apos;t using just one main library/framework for front-end projects that can present a challenge when you&apos;re trying to make projects as consistent as possible (while also letting developers use the technology they prefer).

There are certainly additional challenges that I can point out with more complex state management options (maintenance challenges, the sheer amount of code added, bundle sizes, team knowledge, etc.) but that&apos;ll do for now. I think it really boils down to using the right tool for the right job and realizing that not everything is a nail that requires a complex hammer.

Isn&apos;t it worth considering if the state management pattern itself (whatever it is) may actually be overly complex for a given scenario and that viable alternatives may exist? One size NEVER fits all and there are many applications out there using a complex state management pattern that simply don&apos;t need it at all. I&apos;ve seen it myself many times at companies. For example, an application may perform standard CRUD (Create, Read, Update, Delete) operations directly to a back-end service. Once an operation is complete it&apos;s done. Aside from showing a message to the user there&apos;s nothing else to do from a state perspective. In this simple scenario and many others there&apos;s often no need for a complex state management solution - it would only add unnecessary complexity. Which brings me to 3 of my favorite words: &quot;keep it simple&quot;.

I truly admire architects and developers that have the wisdom, knowledge, expertise, and ability to keep their application code as simple as possible while still meeting the needs of users. Building good software is hard, and the ability to keep code simple is arguably just as hard. It&apos;s an art and skill that has to be developed over time and in some cases I feel like that skill has been lost. Keeping things as simple as possible yields many positive results in the end - especially when it comes to long-term maintenance.

This is definitely one of those highly subjective topics I realize, but let me know your \*constructive\* thoughts on it in the comments. Every situation is different so I&apos;m always interested in hearing different opinions. You can reach out to me on [Twitter](https://twitter.com/danwahlin) as well.

Cross posted to [dev.to](https://dev.to/danwahlin/simplifying-front-end-state-management-with-observable-store-1jjp).

[Discuss on Twitter](https://twitter.com/search?q=https%3A%2F%2Fblog.codewithdan.com%2Fsimplifying-front-end-state-management-with-observable-store%2F&amp;src=typd)</content:encoded></item><item><title>CloudSkills Podcast Interview: Docker, Kubernetes, and Microservices</title><link>https://blog.codewithdan.com/cloudskills-podcast-interview-docker-kubernetes-and-microservices/</link><guid isPermaLink="true">https://blog.codewithdan.com/cloudskills-podcast-interview-docker-kubernetes-and-microservices/</guid><pubDate>Sat, 09 Mar 2019 00:00:00 GMT</pubDate><content:encoded>I recently chatted with my friend [Mike Pfeiffer](https://twitter.com/mike_pfeiffer) who runs the [CloudSkills.fm](https://cloudskills.fm/009) podcast about Docker, Kubernetes, and Microservices. Mike works a lot in the DevOps space and I work in the developer space (and do some DevOps as well) so it was a fun discussion about challenges that come up in both worlds.

We discussed the benefits of containers, the role of Kubernetes, and how both can play an important role when working with Microservices.

Listen to the podcast here:

[![](/images/blog/cloudskills-podcast-interview-docker-kubernetes-and-microservices/2019-03-09_10-19-07.webp)](https://cloudskills.fm/009)

- [CloudSkills.fm Website](https://cloudskills.fm/009)

## Video of the Podcast

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/ue3idTHaQXs&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>New Pluralsight Course: Angular Architecture and Best Practices</title><link>https://blog.codewithdan.com/new-pluralsight-course-angular-architecture-and-best-practices/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-angular-architecture-and-best-practices/</guid><pubDate>Wed, 16 Jan 2019 00:00:00 GMT</pubDate><content:encoded>[![Angular Architecture](/images/blog/new-pluralsight-course-angular-architecture-and-best-practices/AngularArchitecture-850x506.webp)](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/angular-architecture-best-practices)

I’m excited to announce the release of my next course on Pluralsight titled [Angular Architecture and Best Practices](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/angular-architecture-best-practices)! The goal of this course is to provide you with solid, proven guidance to plan out your Angular application architecture, address various challenges that typically come up, and ultimately create a more maintainable and flexible application.

Here are a few questions this course will help you answer:

- Is there one &quot;right&quot; way to architect and build an Angular application? Short answer - NO!
- What are some key concepts I should consider when planning my application architecture?
- Is there any type of planning template I can use to help my team get started?
- Is it important to think through the organization of modules or should I put everything in the root module?
- What&apos;s the difference between shared and core modules?
- How do I structure components? What if I have deeply nested components?
- How should I organize my application features?
- How do I communicate between components? What if I need to communicate across multiple levels of an application?
- What best practices should I be following throughout my application?
- Do I need a state management solution? What are some of the available options and how do they compare?
- What is an observable service and how would I create and use one?
- Can reference types and value types have different effects on my application behavior?
- How do I share code in my application? What if I need to share code between multiple applications?
- What&apos;s an RxJS subject? Is there more than one type of subject?
- How can I use forkJoin, concatMap, switchMap, mergeMap, and other RxJS operators to make more efficient calls to the server?
- Where should I consider using HTTP Interceptors in my app?
- Is it OK to call component functions from a template? Are there alternatives I should consider?
- What different techniques can be used to unsubscribe from observables?
- What are some key security considerations I should be thinking about?
- And much more…

Here’s a summary of the course…

# [Angular Architecture and Best Practices](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/angular-architecture-best-practices)

There are a lot of questions out there about building Angular applications: Are you following established best practices? How easy will it be to maintain and refactor the application in the future? How should features, modules, components, and services be structured? Whether you&apos;re starting a new application from scratch or updating an existing one, what application architecture should be used?

In the [Angular Architecture and Best Practices](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/angular-architecture-best-practices) course you&apos;ll learn about different architectural concepts, best practices, and techniques that can be used to solve some of the more challenging tasks that come up during the planning and development process. You&apos;ll also learn about a planning template that provides a simple and efficient way to get started. You&apos;ll discover different component communication techniques and walk through state management and code organization options. Finally, you&apos;ll explore general best practices, performance considerations, and much, much more.

When you&apos;re finished with this course, you&apos;ll have the skills and knowledge needed to think through the process of building a solid application that is easy to refactor and maintain.

# Course Modules

1. Course Overview
2. Introduction
    - Introduction
    - Prerequisites to Maximize Learning
    - Key Concepts and Learning Goals
    - Sample Application and Software Requirements
    - Course Overview
3. Planning the Application Architecture
    - Introduction
    - Architecture Considerations
    - Architecture Planning Template
    - Architecture Planning Template Example
    - The Angular Style Guide
    - Other Considerations
    - Summary
4. Organizing Features and Modules
    - Introduction
    - Organizing Features
    - Feature Modules
    - Core and Shared Modules
    - Core and Shared in Action
    - Creating a Custom Library
    - Consuming a Custom Library
    - Putting All the Modules Together
    - Summary
5. Structuring Components
    - Introduction
    - Container and Presentation Components
    - Container and Presentation Components in Action
    - Passing State with Input and Output Properties
    - Input and Output Properties in Action
    - Change Detection Strategies
    - Reference vs. Value Types
    - Cloning Techniques
    - Cloning in Action
    - Cloning with Immutable.js
    - Component Inheritance
    - Component Inheritance in Action
    - Summary
6. Component Communication
    - Introduction
    - Component Communication
    - Understanding RxJS Subjects
    - RxJS Subjects in Action - Part 1
    - RxJS Subjects in Action - Part 2
    - Creating an Event Bus Service
    - Using an Event Bus Service
    - Creating an Observable Service
    - Using an Observable Service
    - Unsubscribing from Observables
    - Summary
7. State Management
    - Introduction
    - The Need for State Management
    - State Management Options
    - Angular Services
    - NgRx
    - NgRx in Action
    - ngrx-data
    - ngrx-data in Action
    - Observable Store
    - Observable Store in Action
    - State Management Review
    - Summary
8. Additional Considerations
    - Introduction
    - Functions vs. Pipes
    - Functions and Pipes in Action
    - Using a Memo Decorator
    - HttpClient and RxJS Operators
    - Key Security Considerations
    - HTTP Interceptors
    - Summary
9. Course Summary

There&apos;s a lot of thought and planning that goes into any application. While there are many opinions out there on how to architect an app, I hope [the course](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/angular-architecture-best-practices) provides you with additional insight into the process.

[Discuss on Twitter](https://twitter.com/search?q=https%3A%2F%2Fblog.codewithdan.com%2Fnew-pluralsight-course-angular-architecture-and-best-practices%2F&amp;src=typd)</content:encoded></item><item><title>Free Interactive Coding Course: Build Your First Angular App</title><link>https://blog.codewithdan.com/free-course-build-your-first-angular-app/</link><guid isPermaLink="true">https://blog.codewithdan.com/free-course-build-your-first-angular-app/</guid><pubDate>Tue, 16 Oct 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/free-course-build-your-first-angular-app/2018-10-17_00-55-32.webp)](https://scrimba.com/g/gyourfirstangularapp)

About a year ago I was browsing the web and came across a site called [Scrimba.com](https://scrimba.com/). It provided a unique way to learn about web technologies through a live code editor combined with audio that syncs with the code - something you have to actually try out to realize the full potential. Since I do a lot of training for companies the Scrimba tool really caught my eye.

I was so impressed with the features Scrimba provided that I decided to contact the creators (Per Harald Borgen and Sindre Aarsaether) to let them know that I thought the tool they had built was great and to see if they had plans to license it for various training scenarios. It turns out that Per and Sindre had heard about some of my video training courses on sites like [Pluralsight.com](https://www.pluralsight.com/search?q=dan%20wahlin&amp;categories=course) and [Udemy.com](https://www.udemy.com/user/danwahlin/) and asked if I&apos;d be interested in creating a course for Scrimba. After several video chats over a few months we finalized the details and the [Build Your First Angular App](https://scrimba.com/g/gyourfirstangularapp) course became a reality.

I originally recorded the course back in April of 2018 but the concepts covered apply to the latest version of Angular as well. The course provides a great way to learn Angular while getting hands-on experience with TypeScript, components, templates and data binding, services, routing and more using the live Scrimba code editor and its unique ability to sync with audio clips that describe the code.

You can pause the course at any point, experiment with the code (it&apos;s real code - not videos of code), and then resume the course whenever you want.

The course consists of 33 &quot;screencasts&quot; (although there&apos;s no video as you&apos;ll see if you take the course) that walk you through everything you need to know to get started building Angular apps from start to finish. Here&apos;s the official course agenda:

![](/images/blog/free-course-build-your-first-angular-app/2018-10-17_00-28-30.webp)

  
You can [watch the course for free!](https://scrimba.com/g/gyourfirstangularapp) I hope you enjoy it!</content:encoded></item><item><title>Real Talk JavaScript Podcast: End to End Testing with Cypress.io</title><link>https://blog.codewithdan.com/real-talk-js-podcast-end-to-end-testing-with-cypress-io/</link><guid isPermaLink="true">https://blog.codewithdan.com/real-talk-js-podcast-end-to-end-testing-with-cypress-io/</guid><pubDate>Tue, 16 Oct 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/real-talk-js-podcast-end-to-end-testing-with-cypress-io/realtalkjs.webp)](http://realtalkjs.com)

I had the opportunity to talk with my good friends [John Papa](https://twitter.com/john_papa) and [Ward Bell](https://twitter.com/wardbell) about [Cypress.io](https://cypress.io) and end-to-end testing on the [Real Talk JavaScript](http://realtalkjs.com) podcast. We talked about why I think end-to-end testing should get more attention, how I&apos;m using it in a current project, as well as the benefits it can offer developers.

I wasn&apos;t a huge fan of end-to-end testing in the past, mainly because I hadn&apos;t used it much on projects, had the misconception that it was for &quot;dedicated testers&quot;, and felt like it would require a lot of time to get tests in place. My current project caused me to rethink end-to-end tests, though. Due to some complex scenarios in an application I&apos;m working on, I decided to look at end-to-end tests again and realized they could really help me out. Now I think end-to-end tests are an amazing way for developers and testers alike to ensure that code is actually doing what you expect it to do for end users. With a little practice you can get key tests up and running fairly quickly.

I use [Cypress.io](https://cypress.io) to write and run tests. It lets you easily walk through successful and failed tests to see what happened, analyze what the screen looked like at any given point, and much more. It&apos;s impressive and something I use a lot now to add more confidence to the code I&apos;m building. I find it especially useful when I need to test complex DOM scenarios that I don&apos;t feel are appropriate for unit tests.

Listen to the podcast at the following locations:

- [Real Talk JavaScript Website](http://realtalkjs.com/)
- [iTunes](https://itunes.apple.com/us/podcast/real-talk-javascript/id1437407176?mt=2)
- [Stitcher](https://www.stitcher.com/podcast/realtalk-javascript?refid=stpr)

</content:encoded></item><item><title>ngAir Podcast: Containerizing Angular Apps with Docker</title><link>https://blog.codewithdan.com/ngair-podcast-containerizing-angular-apps-with-docker/</link><guid isPermaLink="true">https://blog.codewithdan.com/ngair-podcast-containerizing-angular-apps-with-docker/</guid><pubDate>Tue, 11 Sep 2018 00:00:00 GMT</pubDate><content:encoded>I had the opportunity to chat with [Justin](https://twitter.com/schwarty), [Bonnie](https://twitter.com/bonnster75), [Alyssa](https://twitter.com/AlyssaNicoll), and [Austin](https://twitter.com/amcdnl) about Angular and Docker on the [ngAir podcast](https://angularair.com/) recently and really enjoyed talking with everyone. We talked about the benefits of containers from a developer and DevOps standpoint, how to create custom images with Dockerfiles, how to build/push/pull images, and how to run containers. We of course focused on the role that containers can play with Angular applications but the concepts apply to any front-end app (or back-end app for that matter).

Here are some of the key links mentioned in the podcast:

- [Containerizing Angular Apps with Docker Course on Pluralsight](http://pluralsight.com/courses/containerizing-angular-apps-docker)
- [Docker for Web Developers Course on Pluralsight](https://www.pluralsight.com/courses/docker-web-development)
- [Angular Core Concepts GitHub Project (has the files shown in the podcast)](https://github.com/DanWahlin/Angular-Core-Concepts)

If you&apos;re interested in additional projects that use Docker, check out my general [GitHub repo](https://github.com/DanWahlin).

You can watch the full ngAir episode below. Thanks to everyone there for having me on!

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/AahRR73LtOY&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>New Pluralsight Course: Containerizing Angular Applications with Docker</title><link>https://blog.codewithdan.com/new-pluralsight-course-containerizing-angular-applications-with-docker/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-containerizing-angular-applications-with-docker/</guid><pubDate>Mon, 27 Aug 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/new-pluralsight-course-containerizing-angular-applications-with-docker/containerizing_angular_docker.webp)](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/containerizing-angular-apps-docker)

I’m excited to announce the release of my next course on Pluralsight titled Containerizing Angular Applications with Docker! This course walks you through the process of containerizing front-end applications (with a focus on Angular) using Docker images and containers.

Here are a few questions this course will help you answer:

- Why “containerize” front-end applications?
- Should I consider using a CDN?
- How can I serve front-end applications created with Angular (or other libraries/frameworks) using nginx?
- How do I create a custom Dockerfile and custom image for my application?
- What is a multi-stage build and how can I use it to build my code and create a final image?
- Can I use Angular CLI features with custom Dockerfiles?
- How do I convert my custom image into a running container?
- How can I run multiple containers simultaneously?
- What options are available for running my container(s) in the cloud?
- Much more…

Here’s a summary of the course…

# [Containerizing Angular Applications with Docker](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/containerizing-angular-apps-docker)

The Angular CLI provides a great way to build and run Angular applications locally, but what do you do when you need to build Angular in another environment? How do you deploy your Angular application between development, staging/QA, and production environments while ensuring that everything works as planned?

In this course, Containerizing Angular Applications with Docker, you’ll explore the role that Docker containers can play in simplifying the process of building and deploying front-end applications (with a focus on Angular). First, you’ll learn about the role of images, containers, and image registries. Next, you’ll discover how to write custom multi-stage Dockerfiles for building Angular code. Then, you’ll delve into different server options such as nginx for running your Angular applications efficiently and consistently across environments. Finally, you’ll explore how to orchestrate multiple containers using Docker Compose and .yml files. By the end of this course, you’ll have the necessary knowledge to efficiently build and run Angular applications across multiple environments by utilizing Docker containers.
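To give a rough idea of the multi-stage approach described above, here is a minimal sketch of a Dockerfile that builds an Angular app with Node and serves the compiled output with nginx. The image tags, paths, and `dist/my-app` output folder are illustrative, not the course’s exact files:

```dockerfile
# Stage 1: build the Angular app with Node (tag is illustrative)
FROM node:10 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: copy the compiled output into a lightweight nginx image
FROM nginx:alpine
COPY --from=build /app/dist/my-app /usr/share/nginx/html
EXPOSE 80
```

You would then build and run it with something like `docker build -t my-app .` followed by `docker run -p 80:80 my-app`.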

# Course Modules

1. Angular and Containers
    - Course Agenda
    - Angular and Containers
    - Prerequisites to Maximize Learning
    - Key Course Concepts
    - Why Use Containers?
    - Running the Angular Application Locally
    - Running the Angular Application in a Container
2. Creating a Multi-Stage Dockerfile
    - Creating a Multi-Stage Dockerfile
    - Creating the Angular Development Dockerfile
    - Multi-stage Dockerfiles
    - Creating the Angular Build Stage
    - Creating an nginx/Angular Stage
    - Creating an nginx/Angular Image
    - Using the VS Code Docker Extension
3. Deploying the Image and Running the Container
    - Deploying the Image and Running the Container
    - Running the Angular Container Locally
    - Running the Angular Container using the VS Code Docker Extension
    - Image Registry Options
    - Deploying the Angular Runtime Image to a Registry
    - Running the Angular Container in Azure
4. Running Multiple Containers
    - Running Multiple Containers
    - Running the Application with Docker Compose
    - Exploring the Docker Compose File
    - Options for Deploying Multiple Images/Containers
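As a rough sketch of the kind of Docker Compose .yml file mentioned above, here is a minimal example that runs two containers together. The service names and image names are illustrative, not the course’s exact files:

```yaml
version: "3"
services:
  # nginx container serving the built Angular app
  nginx-angular:
    image: myregistry/nginx-angular:latest
    ports:
      - "80:80"
  # back-end API container running alongside it
  node-api:
    image: myregistry/node-api:latest
    ports:
      - "8080:8080"
```

Running `docker-compose up` from the folder containing this file would start both containers simultaneously.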

I hope you enjoy the [new course](https://pluralsight.pxf.io/c/1191765/424552/7490?u=https://www.pluralsight.com/courses/containerizing-angular-apps-docker)!</content:encoded></item><item><title>8 Tips to Maximize Your Productivity</title><link>https://blog.codewithdan.com/8-tips-for-maximizing-your-productivity/</link><guid isPermaLink="true">https://blog.codewithdan.com/8-tips-for-maximizing-your-productivity/</guid><pubDate>Fri, 15 Jun 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/8-tips-for-maximizing-your-productivity/productive-1.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/productive-1.jpg)

Getting things done has always been a challenge for most people - and I include myself - regardless of where you work or what you do. No matter how hard some people try, they end up procrastinating on tasks until the last minute. Some people simply focus better when they know they’re out of time and can’t procrastinate any longer. How many times have you put off working on a term paper in school until the very last minute? With only a few hours left, your mental energy and focus seem to kick into high gear, especially as you realize that you either get the paper done now or risk failing. It’s amazing how a little pressure can turn into a motivator and allow our minds to focus on a given task.

Some people seem to specialize in procrastinating just about everything they do while others tend to be the “doers” who get a lot done and ultimately rise up the ladder at work. What’s the difference between these types of people? Is it pure laziness or are other factors at play? I think that some people are certainly more motivated than others, but I also think a lot of it is based on the process that “doers” tend to follow - whether knowingly or unknowingly.

While I’ve certainly fought battles with procrastination, I’ve always had a knack for being able to get a lot done in a relatively short amount of time. I think a lot of my “get it done” attitude goes back to the strong work ethic my parents instilled in me at a young age. I remember my dad saying, “You need to learn to work hard!” when I was around 5 years old. I remember that moment specifically because the first time I heard it I was on a tractor with him while he was trying to move some large rocks into a pile. The tractor was big but so were the rocks, and my dad had to balance the tractor perfectly so that it didn’t tip forward too far. It was challenging work and somewhat tedious, but my dad finished the task and taught me a few important lessons along the way, including persistence, the importance of having a skill, and getting the job done right without skimping along the way.

In this post I’m going to list a few of the techniques and processes I follow that I hope may be beneficial to others. Most of the ideas that follow came from learning and refining my daily work process over the years. However, since most of the ideas are common sense (at least in my opinion), I suspect they can be found in other productivity processes that are out there. Let’s start off with one of the most important yet simple tips: Start Each Day with a List.

## 1\. Start Each Day with a (realistic) List

[![](/images/blog/8-tips-for-maximizing-your-productivity/task_list.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/task_list.jpg)

What are you planning to get done today? Do you keep track of everything in your head or rely on your calendar? While most of us think that we’re pretty good at managing “to do” lists strictly in our head you might be surprised at how effective writing out lists can be. By writing out tasks you’re forced to focus on the most important tasks to accomplish that day, commit yourself to those tasks, and have an easy way to track what was supposed to get done and what actually got done.

**Start every morning by making a list of specific tasks** that you want to accomplish throughout the day (some people like to write them out the night before). I’ll even go so far as to fill in times when I’d like to work on tasks if I have a lot of meetings or other events tying up my calendar on a given day.

I’m not a big fan of using paper since I type a lot faster than I write (plus I write like a 3rd grader according to my wife), so I use the sticky notes feature available in Windows and Mac. Here’s an example of today&apos;s sticky note:

[![](/images/blog/8-tips-for-maximizing-your-productivity/2018-06-15_06-34-42.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/2018-06-15_06-34-42.png)

If you prefer &quot;to do&quot; lists instead of sticky notes, there are a ton of apps out there to help with that. One of my favorites is [Any.do](https://www.any.do/). What do you add to your list? That’s the subject of the next tip.

## 2\. Focus on Small Tasks

It’s no secret that focusing on small, manageable tasks is more effective than trying to focus on large and more vague tasks. When you make your list each morning **only add tasks that you can accomplish within a given time period**. For example, if I only have 30 minutes blocked out to work on an article I don’t list “Write Article”. If I do that I’ll end up wasting 30 minutes stressing about how I’m going to get the article done in 30 minutes and ultimately get nothing done. Instead, I’ll list something like “Write Introductory Paragraphs for Article”. The next day I may add, “Write first section of article” or something that’s small and manageable – something I’m confident that I can get done. You’ll find that once you’ve knocked out several smaller tasks it’s easy to continue completing others since you want to keep the momentum going.

In addition to keeping my tasks focused and small, I also make a conscious effort to limit my list to 4 or 5 tasks initially. I’ve found that if I list more than 5 tasks I feel a bit overwhelmed which hurts my productivity. It’s easy to add additional tasks as you complete others and you get the added benefit of that confidence boost of knowing that you’re being productive and getting things done as you remove tasks and add others.

## 3\. Getting Started is the Hardest (Yet Easiest) Part

[![](/images/blog/8-tips-for-maximizing-your-productivity/get_started.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/get_started.jpg)

I’ve always found that getting started is the hardest part and one of the biggest contributors to procrastination. Getting started working on tasks is a lot like getting a large rock pushed up and over a hill. It’s difficult to get the rock rolling at first, but once you manage to get it rocking some it’s really easy to get it rolling on its way to the bottom. As an example, I’ve written 100s of articles for technical magazines over the years and have really struggled with the initial introductory paragraphs. Keep in mind that these are the paragraphs that don’t really add that much value (in my opinion anyway). They introduce the reader to the subject matter and nothing more. What a waste of time for me to sit there stressing about how to start the article. On more than one occasion I’ve spent more than an hour trying to come up with 2-3 paragraphs of text. Talk about a productivity killer!

Whether you’re struggling with a writing task, some code for a project, an email, or other tasks, jumping in without thinking too much is the best way to get started I’ve found. I’m not saying that you shouldn’t have an overall plan when jumping into a task, but on some occasions you’ll find that if you simply jump into the task and **stop worrying about doing everything perfectly** that things will flow more smoothly. For my introductory paragraph problem I give myself 5 minutes to write out some general concepts about what I know the article will cover and then spend another 10-15 minutes going back and refining that information. That way I actually have some ideas to work with rather than a blank sheet of paper. If I still find myself struggling I’ll write the rest of the article first and then circle back to the introductory paragraphs once I’m done.

To sum this tip up: Jump into a task without thinking too hard about it. It’s better to get the rock at the top of the hill rocking some than doing nothing at all. You can always go back and refine your work.

## 4\. Learn a Productivity Technique and Stick to It

There are a lot of different productivity programs and seminars out there being sold by companies. I’ve always laughed at how much money people spend on some of these motivational programs/seminars because I think that being productive isn’t that hard if you create a reusable set of steps and processes to follow. That’s not to say these programs/seminars aren’t worth the money, of course - I know they’ve definitely benefited some people who have a hard time getting things done and staying focused.

One of the best productivity techniques I’ve ever learned is called the [Pomodoro Technique](https://en.wikipedia.org/wiki/Pomodoro_Technique) and it’s completely free. This technique is an extremely simple way to **manage your time without having to remember a bunch of steps, color coding mechanisms, or other processes**. The technique was originally developed by Francesco Cirillo in the 80s and can be implemented with a simple timer. In a nutshell here’s how the technique works:

1. Pick a small task to work on
2. Set the timer to 25 minutes and work on the task
3. Once the timer rings record your time
4. Take a 5 minute break
5. Repeat the process

Here’s why the technique works well for me:

- It forces me to focus on a single task for 25 minutes. In the past I had no time goal in mind and just worked aimlessly on a task until I got interrupted or bored. 25 minutes is a small enough chunk of time for me to stay focused. Any distractions that may come up have to wait until after the timer goes off. If the distraction is really important then I stop the timer and record my time up to that point.
- When the timer is running I act as if I only have 25 minutes total for the task (like you’re down to the last 25 minutes before turning in your term paper... frantically working to get it done), which helps me stay focused and turns into a “beat the clock” type of game. It’s actually kind of fun if you treat it that way and really helps me focus on the task at hand. I automatically know how much time I’m spending on a given task (more on this later) by using this technique.
- I know that I have 5 minutes after each pomodoro (the 25 minute sprint) to waste on anything I’d like including visiting a website, stepping away from the computer, etc. which also helps me stay focused when the 25 minute timer is counting down.

There are certainly many other productivity techniques and processes out there (and a [slew of books](https://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Dstripbooks&amp;field-keywords=productivity) describing them), but the Pomodoro Technique has been the simplest and most effective technique I’ve ever come across for staying focused and getting key tasks done each day. While you don&apos;t need an app to use this technique, there are [several apps out there](https://www.google.com/search?source=hp&amp;ei=ldcjW6bTDpPS8APjrafoCA&amp;q=pomodoro+apps&amp;oq=pomodoro+apps&amp;gs_l=psy-ab.3..0l6.1519.3140.0.3261.15.14.0.0.0.0.110.1094.11j2.14.0....0...1.1.64.psy-ab..1.14.1170.6..35i39k1j0i131k1j0i131i20i264k1j0i20i264k1.77.Bz5iYj9LANM) if you&apos;re interested.

## 5\. Persistence is Key

[![](/images/blog/8-tips-for-maximizing-your-productivity/persistence.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/persistence.jpg)

Getting things done is great, but one of the biggest lessons I’ve learned in life is that persistence is key, especially when you’re trying to get something done that at times seems insurmountable. Small tasks ultimately lead to larger tasks getting accomplished; however, it’s not all roses along the way, as some of the smaller tasks may come with their own share of bumps and bruises that lead to discouragement about the end goal and whether or not it is worth achieving at all.

I’ve been on several long-term projects over my career as a software developer (I have one personal project going right now that fits well here) and found that repeating, “Persistence is the key!” over and over to myself really helps. Not every project turns out to be successful, but if you don’t show persistence through the hard times you’ll never know if you succeeded or not. Likewise, if you don’t persistently stick to the process of creating a daily list, follow a productivity process, etc. then the odds of consistently staying productive aren’t good.

## 6\. Track Your Time

[![](/images/blog/8-tips-for-maximizing-your-productivity/time_management.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/time_management.jpg)

How much time do you actually spend working on various tasks? If you don’t currently track time spent answering emails, on phone calls, browsing the Internet, and working on various tasks then you might be surprised to find out that a task that you thought was going to take you 30 minutes ultimately ended up taking 2 hours. If you don’t track the time you spend working on tasks how can you expect to learn from your mistakes, optimize your time better, and become more productive? That’s another reason why I like the Pomodoro Technique – it makes it easy to stay focused on tasks while also tracking how much time I’m working on a given task.

## 7\. Eliminate Distractions

I blogged about this tip several years ago but wanted to bring it up again. If you want to be productive (and ultimately successful at whatever you’re doing) then you can’t waste a lot of time playing games or hanging out on Twitter, Facebook, or other time-sucking websites. If you see an article you’re interested in that has no relation at all to the tasks you’re trying to accomplish, then bookmark it and read it when you have some spare time (such as during a pomodoro break). Fighting the temptation to check your friends’ status updates on a social media site? Resist the urge and realize how much those types of activities are hurting your productivity and taking away from your focus.

I’ll admit that eliminating distractions is still tough for me personally and something I have to constantly battle. But, I’ve made a conscious decision to cut back on my visits and updates to social media and other sites.

Ultimately it comes down to self-discipline and how badly you want to be productive and successful in your career, life goals, hobbies, or whatever you’re working on. Rather than having your homepage take you to a time-wasting news site, game site, social site, or others, how about adding something like the following as your homepage? Every time your browser opens you’ll see a personal message which helps keep you on the right track. You can [download my uber-sophisticated homepage here](https://www.dropbox.com/s/5jsro0gfag5q89s/Dont_Waste_Time.zip?dl=0) if interested.

![Don&apos;t Waste Time Clock Image](/images/blog/8-tips-for-maximizing-your-productivity/2018-06-15_06-22-33.webp)

If you want a tool to help eliminate distractions (while also helping you stay focused for a set period of time) check out [Forest: stay focused, be present](https://forestapp.cc/en/). You can use it with iOS, Android or as a Chrome extension. There are a lot of [additional tools out there](https://www.google.com/search?source=hp&amp;ei=7uYiW-WSDqb_0gLCorewBg&amp;q=focus+block+sites&amp;oq=focus+block+sites) to block you from going to time wasting sites while you&apos;re trying to focus as well.

## 8\. Learn to Relax

[![](/images/blog/8-tips-for-maximizing-your-productivity/meditation.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/meditation.jpg)

This final tip is arguably one of the most important to maximize your productivity in my opinion. There are [many studies](https://scholar.google.com/scholar?hl=en&amp;as_sdt=0%2C3&amp;q=relaxation%2C+stress+and+work+productivity+and+creativity&amp;btnG=) out there touting the benefits of stress and anxiety reduction on productivity and creativity. We all know that unhealthy stress and anxiety can reduce our productivity but how many of us take action? In today&apos;s stressed-out world it&apos;s more important than ever to work on this skill. It&apos;s something that takes practice for many of us and requires a lot of effort, but it&apos;s totally worth the time investment in my experience. It may sound a bit &quot;out there&quot;, but **learning to quiet your mind might be one of the best things you&apos;ll ever do for yourself.**

So how do you learn to relax? Every person is unique, so I can&apos;t necessarily answer the question for you personally. What I can do is share a few techniques that I like to use. I&apos;m going to be as brief as possible here since this is a big topic (maybe I&apos;ll do another post that focuses on this and how it&apos;s changed my life for the better).

While some people are naturally relaxed and calm most of the time, I&apos;ve always been a bit of a high stress/anxious person that tends to worry about things that are often out of my control. In 2nd grade I was given the &quot;Head Worry Wart&quot; award at the end of the school year (no joke - I actually received an award certificate for that believe it or not). At the time I was proud that I received an award, but now I laugh a bit as I look back and realize that the &quot;award&quot; was an early warning sign that I needed to relax and worry less.

For nearly 40 years I felt like I had no control over my stress and anxiety/worry and reached a point where I felt like I was at the bottom of a hole that I could never get out of - I just felt trapped in my own stress and didn&apos;t know how to stop or even reduce it aside from exercising. I finally realized I had to make a change. I&apos;ve run my own software and training company for nearly 20 years now and while I wouldn&apos;t change that aspect of my life, it does add some additional stress and worry. I realized that reducing stress/anxiety/worry was a skill that I needed to develop and enhance. As with learning any skill, practice and hard work are required.

I read book after book, article after article, listened to audiobooks, watched YouTube videos about reducing stress/anxiety/worry, talked with people who I felt had already mastered the skill, and finally decided that the key to reducing stress for me was learning to be [mindful](https://en.wikipedia.org/wiki/Mindfulness) about how I was feeling. If you&apos;re new to [mindfulness](https://en.wikipedia.org/wiki/Mindfulness), in a nutshell it&apos;s learning to be aware of how you&apos;re feeling in the present moment. That means being aware of how your body feels (emotions, any pain you&apos;re feeling, your heart rate, etc.) as well as what thoughts you&apos;re thinking.

I wasn&apos;t truly aware of any of these things aside from the fact that I felt stressed out a lot. One negative thought could trigger a cascade of negative thoughts which spiraled out of control at times, leading to a lot of unnecessary stress and worry, and a huge decrease in productivity. My mind was often like a runaway train that I didn&apos;t even realize was running away. In talking with others I&apos;ve come to realize that I&apos;m not alone here.

While learning to be more mindful is a big topic that many [articles](https://flipboard.com/@dwahlin/the-mind-magazine-e28fsudey), [books](https://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Dstripbooks&amp;field-keywords=mindfulness), and [scientific studies](https://scholar.google.com/scholar?q=mindfulness+research+studies&amp;hl=en&amp;as_sdt=0&amp;as_vis=1&amp;oi=scholart) have addressed, here&apos;s what I ended up doing:

1. I practiced monitoring and analyzing what I was thinking about and how I was feeling in different situations every day. This is extremely hard to do until you practice it day after day, especially given that our brains normally jump around a lot (something called the &quot;monkey mind&quot;). You have to consistently remind yourself to &quot;check in&quot; throughout the day and monitor what you&apos;re thinking about and how you&apos;re feeling. If I find I&apos;m ruminating over something stupid (especially things that are made up and/or out of my control), I shift my thoughts to something more beneficial, take a few deep breaths, or use other techniques. If I sense that I&apos;m getting upset over a situation (code that isn&apos;t working, a difficult topic to learn, a challenging client or meeting, etc.), I note how I&apos;m feeling and employ some of the tactics that help me relax and get back to being productive.
2. To get better at mindfulness and controlling my thoughts I started actively quieting my mind by [meditating](https://en.wikipedia.org/wiki/Meditation) on a consistent basis. I&apos;ll admit that I laughed at the concept of meditation earlier in life and thought it was for monks (no offense to any monks out there - I now have a ton of respect for the skills you&apos;ve learned :-)). In hindsight, I just didn&apos;t know what I didn&apos;t know when it came to meditation. Meditation was extremely hard for me at first. Try sitting in a quiet room for 60 seconds and keeping your mind clear of any thoughts - don&apos;t think anything! For newbies, within a few seconds your mind will start thinking about something and you won&apos;t even realize it. By learning to quiet your mind you can relax and develop greater productivity and creativity as a result. You&apos;ll also learn to be more mindful about what you&apos;re thinking and feeling. This doesn&apos;t mean you ignore any thoughts that come up while meditating. Instead, you note them and go back to focusing on something else like your breathing, a mantra, etc. You can find a list of meditation techniques [here](https://liveanddare.com/types-of-meditation).
3. Use an app to get started with meditation rather than trying to go it alone. There are [many meditation techniques](https://liveanddare.com/types-of-meditation) out there as mentioned earlier, but if you&apos;re interested in getting started meditating I recommend apps such as [Insight Timer](https://insighttimer.com/), [Welzen](https://welzen.org/), [Headspace](https://www.headspace.com/), or [Aware](https://awaremeditationapp.com/). I alternate between several apps and even use breathing apps like [Pranayama](https://itunes.apple.com/us/app/health-through-breath-pranayama/id341935130?mt=8). Here are some of the apps I have loaded on my phone:

[![](/images/blog/8-tips-for-maximizing-your-productivity/2018-06-15_07-25-45-576x1024.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/06/2018-06-15_07-25-45.png)

So has meditating and being more mindful helped my productivity? I can say &quot;Yes&quot; with 100% confidence. It&apos;s helped me eliminate a ton of stress, anxiety, and worry and made me much more productive (especially in stressful situations) and more creative overall. When I&apos;m stuck on something, I recognize how I&apos;m feeling and what I&apos;m thinking and typically step away for a few minutes (I used to suffer through the problem until it was solved which wasn&apos;t very efficient). I&apos;ve seen many other benefits as well in my life but that&apos;s a topic for another blog post.

## Summary

Is there a single set of steps that, if followed, can ultimately lead to productivity? I don’t think so since one size has never fit all. Every person is different, works in their own unique way, and has their own set of motivators, distractions, and more. If you learn what steps work best for you and gradually refine them over time, you can come up with a personal productivity process that serves you well. Productivity is definitely an “art” that anyone can learn with a little practice and persistence. Start with one productivity technique to get things kicked off and then add additional techniques as needed.

You’ve seen some of the steps that I personally like to follow and I hope you find some of them useful in boosting your productivity. If you have others you use please leave a comment. I’m always looking for ways to improve.</content:encoded></item><item><title>Video: Microservices with Docker, Angular, and ASP.NET Core</title><link>https://blog.codewithdan.com/video-microservices-with-docker-angular-and-asp-net-core/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-microservices-with-docker-angular-and-asp-net-core/</guid><pubDate>Sun, 13 May 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/video-microservices-with-docker-angular-and-asp-net-core/2018-04-05-21.52.32-300x225.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/05/2018-04-05-21.52.32.jpg)I recently had the opportunity to speak to a group in Denver, Colorado about Microservices, Docker, Angular, and ASP.NET Core which was a lot of fun. [Briebug](https://twitter.com/BrieBugSoftware) sponsored the event at [Alamo Drafthouse](https://drafthouse.com/) and we had a great turnout!  Thanks to Briebug for organizing the event (really appreciate Anne, Jesse, and Bill for everything they did) and for everyone that came out to see the talk.

Here&apos;s an overview of what the talk was all about:

Learn about the role that microservices can play in today&apos;s enterprise environments in this talk by Dan Wahlin. Learn what a microservice is, how an Angular client can call into &quot;microservices&quot;, how to create RESTful microservices using ASP.NET Core and Node.js, and the role Docker containers can play in the overall process. Throughout the talk you’ll hear about the pros and cons of microservices and see the benefits that Docker can provide. If you’ve wondered about microservices, Docker or other technologies then this talk will provide you with a solid jumpstart.

## Microservices with Docker, Angular, and ASP.NET Core

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/acykoYAgBsA&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Getting Started with Machine Learning using TensorFlow and Docker</title><link>https://blog.codewithdan.com/getting-started-with-machine-learning-using-tensorflow-and-docker/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-with-machine-learning-using-tensorflow-and-docker/</guid><pubDate>Thu, 03 May 2018 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/getting-started-with-machine-learning-using-tensorflow-and-docker/2018-05-03_11-24-40-1024x1016.webp)Machine Learning (ML) gets a lot of press nowadays and for good reason. It can be used to gain insight into areas that were difficult to tap into even a few years ago. Want to know what type of object is shown in a picture? [Machine learning](https://en.wikipedia.org/wiki/Machine_learning) can tell you. Need help predicting the next big stock to buy based on historical trends? Machine learning can help out there as well. The sky is the limit! 
Machine Learning (and more specifically a technique for implementing it called [Deep Learning (DL)](https://en.wikipedia.org/wiki/Deep_learning)) can help analyze financial information, filter spam, examine healthcare records, assess security exploits, perform face recognition, enable driver-less cars, and much more. It&apos;s one of the key drivers moving artificial intelligence (AI) forward.

Although I spend most of my time in the web development arena, I&apos;ve been dedicating a lot of time over the past few months researching machine learning concepts. It&apos;s required me to refresh my linear algebra, matrix operations, and stats knowledge some, but it&apos;s been a fun ride overall. What&apos;s been really fun is researching [TensorFlow](https://www.tensorflow.org/) (an open source machine learning framework) and related frameworks. It&apos;s a big learning curve, but ML/DL frameworks can abstract away a lot of the math and algorithms and let you do some amazing things with a minimal amount of code.

In this post I wanted to show a simple example of getting started with [TensorFlow](https://www.tensorflow.org/) that doesn&apos;t require learning Python or another language AND doesn&apos;t require you to install anything on your machine aside from [Docker Community Edition](https://www.docker.com/community-edition) to get started. You&apos;ll of course have to use Python or another language at some point if you write any custom Machine Learning programs, but we won&apos;t worry about that in this post. Since Docker containers are used, once you&apos;re done with the following example you can remove it from your machine instantly. Let&apos;s get started!

## Image Identification with TensorFlow and Docker

The general demonstration shown here is covered in several places on the web, but the steps below allow you to try it out quickly and easily using Docker. The goal is to examine a bunch of pictures of flowers to create a training model about the flowers. After doing that you&apos;ll pass a picture of a flower through TensorFlow and it will tell you what type of flower it is based on the training data it was given.

 

1\. Install [Docker Community Edition](https://www.docker.com/community-edition) if you don&apos;t have it on your machine already.

2\. Run the following command to download the TensorFlow image and run the container:

```
docker run -it -p 8888:8888 tensorflow/tensorflow
```

Note: Port 8888 is for running TensorFlow programs from [Jupyter notebook](https://jupyter.org/) (a way to share documents with live code included). Although we could use the TensorFlow container directly (via &apos;docker exec&apos;), we&apos;re going to leverage Jupyter notebook here.

3\. A link will be displayed in the console. Visit http://localhost:8888/?token=&lt;token&gt; (use the full link shown in your console) to see the Jupyter notebook site.

4\. Once you&apos;re on the local website select **New** ==&gt; **Terminal** from the menu options on the page. A new terminal/console window will load.

5\. Enter the following commands in the console to install Git and Nano. You&apos;ll be prompted to continue a few times so select &quot;enter&quot; or &quot;yes&quot; as appropriate.

```
add-apt-repository ppa:git-core/ppa
apt update
apt install git nano
```

6\. Clone the following TensorFlow repository by running the following command:

```
git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
```

This will provide the scripts needed to train the model so that we can identify what&apos;s in a picture.

7\. Move into the new **tensorflow-for-poets-2** folder:

```
cd tensorflow-for-poets-2
```

8\. Type the **ls** command in the terminal window to list what&apos;s in the folder. Note that there are **scripts** and **tf\_files** folders.

[![](/images/blog/getting-started-with-machine-learning-using-tensorflow-and-docker/2018-05-03_12-08-20-1024x125.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/05/2018-05-03_12-08-20.png)

9\. Download some training images and place them in the **tensorflow-for-poets-2** folder by running the following command (make sure you copy the entire command....you might have to scroll right to see it all). These images are provided by the TensorFlow project and include images of various flowers.

```
curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C tf_files
```

10\. Run the following command to start the training process (ensure that you copy the full command text). This will take a while to run while all of the images are analyzed, so feel free to go grab a snack while it&apos;s running.

```
python scripts/retrain.py --bottleneck_dir=tf_files/bottlenecks --how_many_training_steps 500 --model_dir=tf_files/inception --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --image_dir tf_files/flower_photos
```

11\. Run the following command to create a new python file:

```
touch classify_image.py
```

12\. Open the file in Nano (or another terminal editor if you have a preference):

```
nano classify_image.py
```

13\. Run the following curl command to download a python script into the **classify\_image.py** file you just created (this saves typing it in manually). The script will allow us to compare an image to the trained model:

```
curl https://gist.githubusercontent.com/DanWahlin/2b0186897e8e5ab7be17c0d8ca86b569/raw/4d47eccb47c386814dfe1e387c81de9afaad6585/classify_image.py -O
```

&lt;script src=&quot;https://gist.github.com/DanWahlin/2b0186897e8e5ab7be17c0d8ca86b569.js&quot;&gt;&lt;/script&gt;

14\. Run **classify\_image.py** and pass it an image that you&apos;d like to identify (I passed it an existing rose image):

```
python classify_image.py tf_files/flower_photos/roses/17051448596_69348f7fce_m.jpg
```

 

15\. Once it&apos;s done you should see information about what the image is:

[![](/images/blog/getting-started-with-machine-learning-using-tensorflow-and-docker/2018-05-03_12-45-41-1024x187.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/05/2018-05-03_12-45-41.png)

In this example it detected a rose (which is correct) with 82% confidence. With additional training images the accuracy goes up.

 

16\. Close the Jupyter notebook webpage.

 

17\. Stop the container by pressing **CTRL + C**.

 

18\. To remove the container first run the following command to get the container ID (the `-a` flag includes stopped containers):

```
docker ps -a
```

 

19\. Now remove the container by passing the first few characters of the ID to the following command:

```
docker rm [container_id]
```

20\. Now remove the TensorFlow image by first locating the ID:

```
docker images
```

 

21\. Remove the image using the following command and everything is now gone from your machine!

```
docker rmi [image_id]
```

## Summary

This simple example only scratches the surface of what Machine Learning can do but provides a fairly straightforward example of getting started. While this type of Machine Learning can be done more easily using some of the Machine Learning cloud services available (such as [Azure&apos;s Cognitive Services](https://azure.microsoft.com/en-us/services/cognitive-services/directory/vision/), [Google Cloud&apos;s AutoML](https://cloud.google.com/automl/), and [Amazon&apos;s Rekognition](https://aws.amazon.com/rekognition/) to name a few), I always enjoy learning more about how a process works before jumping into other options.

[![](/images/blog/getting-started-with-machine-learning-using-tensorflow-and-docker/guage.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/05/guage.jpg)While you may not be interested in classifying flowers (unless that&apos;s your thing :-)), there are a lot of great uses for image classification that can help with automation. For example, I&apos;ve seen an example of an app that can convert pictures taken from industrial gauges out in the field into actual numbers that are reported on a dashboard without installing any new equipment (aside from a camera that points at the gauges of course). Images from the field are run through an ML process that then converts the gauge positions to numbers. It&apos;s amazingly powerful and as mentioned....the sky&apos;s the limit!

If you&apos;re interested in learning more about Machine Learning and AI, check out the new [Flipboard Magazine](https://flipboard.com/@dwahlin/artificial-intelligence-and-machine-learning-j55khak1y) I started (it&apos;s free to access). As I find new articles on the subject I add them to the magazine.</content:encoded></item><item><title>Upgrading an Application to Angular 6: Step By Step</title><link>https://blog.codewithdan.com/upgrading-an-application-to-angular-6-step-by-step/</link><guid isPermaLink="true">https://blog.codewithdan.com/upgrading-an-application-to-angular-6-step-by-step/</guid><pubDate>Thu, 03 May 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/upgrading-an-application-to-angular-6-step-by-step/angular-e1459634290861.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/03/angular-e1459634290861.png)Angular 6 is out and it offers some great [new functionality in the CLI and overall framework](https://blog.angular.io/version-6-of-angular-now-available-cc56b0efa7a4). One of the biggest new features (IMO anyway) is the CLI&apos;s support for schematics and the `ng generate library` command to create and publish libraries (a big pain point that is now simplified). But, I digress....this post is about upgrading an application. [Click here](https://blog.angular.io/version-6-of-angular-now-available-cc56b0efa7a4) if you want a quick look at all of the great new features.

I have a large project I&apos;m working on that&apos;s still in development so I decided to take the plunge and try out some of the new features to upgrade from Angular 5 to Angular 6. Here are the specific steps I went though along with some commentary along the way. Additional upgrade documentation can be [found here](https://update.angular.io/).

## Upgrade Steps

 

**1.** The first step was to update to the latest version of the Angular CLI:

`npm install -g @angular/cli`

 

**2.** From there I went into my project&apos;s root folder (where **package.json** lives) and ran the following command.

`ng update`

 

This command updates your **package.json** file to the latest Angular-related package versions. I was happy to see that it left all of my other dependencies alone. One of the unsung heroes of the CLI is the ability to do dry runs using the `--dry-run` or `-d` switches. You can use that here to see an overview of what will happen without actually affecting anything in your project. I use `-d` all the time when generating new files with the CLI to see what impact a given command will have.

Note that there are some other options you can pass to `ng update` which I&apos;ll mention below.

 

**3.** From there I ran `ng build` just to see where things stood code-wise. Everything built perfectly....no....no it didn&apos;t build perfectly. False alarm. OK - I have some work to do but fortunately it all seems related to some RxJS 6 changes. More on that in a moment.

 

**4.** I wanted to see any big differences in files like **angular-cli.json** between my project and projects created with the new version of the CLI so I ran `ng new my-project --routing` to create a new project. I noticed that **angular-cli.json** was renamed to **angular.json**. The JSON data is quite different between the two versions so I kept my older **angular-cli.json** file around temporarily and copied the new **angular.json** file into my project. I noted differences between the two, made the appropriate changes (such as the project name), and then deleted **angular-cli.json**. After doing a build I received the following error. It can&apos;t be all roses I guess. :-)

`Could not find module &quot;@angular-devkit/build-angular&quot;`

Note that you don&apos;t have to update your **angular-cli.json** file. Things will still build and work as expected without doing that. I did it manually as mentioned just to explore things more, but you can also run this command to update **package.json** versions AND change **angular-cli.json** to the newer **angular.json** format:

`ng update @angular/cli`

 

**5.** I figured that a package must be missing so I looked at the new project I generated earlier and noticed it had the following dev dependency in the **package.json** file:

`&quot;@angular-devkit/build-angular&quot;: &quot;~0.6.0&quot;`

 

**6.** I added this new dependency into my project&apos;s **package.json** file and after running `npm i` tried out `ng build` again. While the previous error was gone, I hit a ton of RxJS errors (which was expected since I saw them earlier). RxJS has been changed in v6 to support better tree shaking and to simplify some of the import statements and some of these changes were causing the build errors as a result. You can find the [RxJS upgrade guide here](https://github.com/ReactiveX/rxjs/blob/master/MIGRATION.md). The errors mention not being able to find an **rxjs-compat** package. You can use that package if you want to leave your existing RxJS code &quot;as is&quot; and not take the time to move it completely to v6+. I had thought that the `ng update` command would automatically add the **rxjs-compat** package but it was nowhere to be found in my updated **package.json** file. While I could easily add it, I decided I was going all the way - out with the old and in with the new! Keep in mind that if you&apos;re using 3rd party libraries that rely on RxJS you&apos;ll need to add **rxjs-compat** into your **package.json** for now until the library is updated to use RxJS 6+.

 

**7.** The errors that were now showing (and there were a bunch of them) were due to changes in how RxJS symbols are imported with RxJS 6+. I had already moved to using piped operators (an optional change in RxJS v5.5) so I didn&apos;t have to change how I was using operators such as **map**, **catchError**, **filter**, **concatMap**, etc. You&apos;ll want to move to [piped operators](https://github.com/ReactiveX/rxjs/blob/master/MIGRATION.md#pipe-syntax) if you&apos;re still using the older chained operators.

I did have to make the following changes to my RxJS imports throughout the project, though ([RxJS import changes can be found here](https://github.com/ReactiveX/rxjs/blob/master/MIGRATION.md#import-paths)). You&apos;ll notice that the changes are simple (and quite predictable). See the note below for an automated way of doing this update.

 

- `import { Subscription } from &apos;rxjs/Subscription&apos;` was changed to `import { Subscription } from &apos;rxjs&apos;`
- `import { Subject } from &apos;rxjs/Subject&apos;` was changed to `import { Subject } from &apos;rxjs&apos;`
- `import { BehaviorSubject } from &apos;rxjs/BehaviorSubject&apos;` was changed to `import { BehaviorSubject } from &apos;rxjs&apos;`
- `import { Observable } from &apos;rxjs/Observable&apos;` was changed to `import { Observable } from &apos;rxjs&apos;`
- `import { of } from &apos;rxjs/observable/of&apos;` was changed to `import { of } from &apos;rxjs&apos;`

 

Note: [Igor Minar](https://twitter.com/igorminar) (Angular team core member) let me know about a package called [rxjs-tslint](https://www.npmjs.com/package/rxjs-tslint) that can help automate the process of moving from RxJS 5 to RxJS 6. You can get more details about it [here](https://www.npmjs.com/package/rxjs-tslint). After installing it you can run an **rxjs-5-to-6-migrate** command or update your **tslint.json** file to run it as part of your linting process.

 

**8.** POW! Everything compiled at that point which was good to see especially since I hadn&apos;t spent a ton of time on the migration. But, compiling is one thing. Running in the browser without any new errors is quite another thing. So did it work???

 

**9.** YEEEESSSS! The app loaded and worked as expected - no errors in the dev console. I ran my [Cypress.io](https://www.cypress.io/) end-to-end tests and everything checked out.

**10.** Although the project worked at this point, there was still one additional change I could (optionally) make related to services. v6 offers a new option that saves you from having to manually define a service&apos;s provider in a module (or use the CLI with `--module` as you generate services). This new feature also adds better tree-shaking support to the build process. I decided to remove my existing core module providers and go with the new `providedIn` property available in the Injectable decorator. This is how services look now when using `ng g service &lt;myServiceName&gt;`:

 

```

import { Injectable } from &apos;@angular/core&apos;;

@Injectable({
  providedIn: &apos;root&apos;
})
export class DataService {

  constructor() { }
}
```

That changed my core module providers from:

```

providers: [
               CoursesDataService, EditService, DisplayModeService,
               UtilitiesService, FilterService, SorterService, Cloner,
               RouteParamsService, UsersDataService, UserProgressService,
               EventBusService, TemplateService,
               {
                  provide: HTTP_INTERCEPTORS,
                  useClass: AuthInterceptor,
                  multi: true,
                }
            ]
```

**TO:**

```

providers: [
               {
                  provide: HTTP_INTERCEPTORS,
                  useClass: AuthInterceptor,
                  multi: true,
                }
            ]
```

 

While that&apos;s a very minor change, the big benefit is in some of the enhanced tree shaking that the build process can do now.

 

## Summary

 

While each project always has its own unique challenges, I found the migration to Angular 6 nice and smooth overall. Some of the RxJS changes were a bit painful, but I like the changes due to better tree shaking and fewer import statements moving forward. I also could&apos;ve chosen to use the **rxjs-compat** library to keep the existing RxJS code working &quot;as is&quot;. I didn&apos;t do that, but it&apos;s a nice option to have if you want to move forward to v6 but not worry about changing imports and chained operators (if you aren&apos;t using piped operators yet).

Updating to the new **angular.json** file format also adds benefits that I can leverage down the road such as [CLI workspaces](https://blog.angular.io/version-6-of-angular-now-available-cc56b0efa7a4), `ng generate library` support, and more. This project didn&apos;t have many external dependencies on 3rd party libraries and I suspect some of those may cause migration challenges for other projects. That&apos;s a key scenario where you&apos;ll likely need to use **rxjs-compat** if the target library is using RxJS. But, in the end I spent less than 2 hours doing the upgrade plus writing this post while doing it....not bad at all!</content:encoded></item><item><title>My Interview on the IT Career Energizer Podcast: Career Tips and Life Lessons Learned</title><link>https://blog.codewithdan.com/my-interview-on-the-it-career-energizer-podcast-career-tips-and-life-lessons-learned/</link><guid isPermaLink="true">https://blog.codewithdan.com/my-interview-on-the-it-career-energizer-podcast-career-tips-and-life-lessons-learned/</guid><pubDate>Mon, 23 Apr 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/my-interview-on-the-it-career-energizer-podcast-career-tips-and-life-lessons-learned/2018-04-23_22-06-10-1024x162.webp)](http://itcareerenergizer.com/e54/)

I had the opportunity to talk with Phil Burgess on the [IT Career Energizer podcast](http://itcareerenergizer.com/e54/) recently and really enjoyed the discussion. I&apos;m used to talking about technical topics when I&apos;m invited to a podcast, but this interview was completely different. Instead of getting technical, we focused on career tips, the importance of being willing to learn, life lessons learned, and some of the mental barriers that we can all overcome to advance our career, our life, and our overall happiness.

We addressed the following questions (and others) in the podcast:

- Unique Career Tips
- Worst Career Moment
- Career Highlight/Greatest Successes
- What Excites You About the Future of a Career in IT
- What Attracted You to a Career in IT in the First Place?
- A Parting Piece of Career Advice

I&apos;ll admit that some of the tips I mention may seem a bit &quot;old fashioned&quot; to some people especially in the age of social media. Things like &quot;always be honest&quot; and &quot;treat people how you&apos;d like to be treated&quot; for example. These concepts are the cornerstone of my business and how I strive to live life in general. I view them as essential rules to live by if you want to advance your career and build lasting relationships. Since social media has provided a way for some to hide behind a computer or phone while throwing out insults, living by some of these guidelines is more important than ever in my opinion.

Another big part of my life nowadays centers around mindfulness, meditation, and gaining insight into what I&apos;m thinking about. When I was younger I had no idea what was really going on in my head - emotions ruled each day and I definitely had what some call the &quot;monkey mind&quot;. I now realize how much the negative and fearful thoughts flying around in my head held me back. Phil and I were able to discuss this some as well as a few techniques I use to be more mindful and aware so that I can direct my thoughts in the direction I want to go in life rather than being pushed in whatever direction my mind randomly goes on a given day. One of my favorite quotes from the podcast is:

&gt; “It’s amazing how much we hold ourselves back by what we think”

Thanks to Phil for having me on the podcast and I hope you [enjoy the listen](http://itcareerenergizer.com/e54/)!

[![Visit podcast site](/images/blog/my-interview-on-the-it-career-energizer-podcast-career-tips-and-life-lessons-learned/2018-04-23_22-16-55.png)](http://itcareerenergizer.com/e54/)</content:encoded></item><item><title>&quot;Containerizing&quot; Angular with Docker: My ng-conf Talks and Overall Experience</title><link>https://blog.codewithdan.com/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/</link><guid isPermaLink="true">https://blog.codewithdan.com/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/</guid><pubDate>Sat, 21 Apr 2018 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/2018-04-18-11.46.49-1024x768.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/04/2018-04-18-11.46.49.jpg)[ng-conf 2018](https://www.ng-conf.org/) is officially over....too bad it can&apos;t go on forever. It&apos;s such a great conference that you don&apos;t want it to end. In addition to hearing a lot of high-quality talks from the Angular team and other awesome speakers, I had the opportunity to hang out with [John Papa](https://twitter.com/john_papa) a lot (one of my best friends) as well as several other close friends, and made many new friends throughout the week as well (thanks to my buddy [Brian Clark](https://www.clarkio.com/2017/06/06/new-start-at-msft/) for the picture to the right). Here&apos;s a little information about the conference and my overall experience. If you&apos;re only interested in videos of the talks I gave [scroll to the bottom](#talks) to find those.

ng-conf had an 80&apos;s theme this year which was a lot of fun. They played 80&apos;s movies like &quot;Back to the Future&quot; in a dedicated movie room, had a Dungeons and Dragons/Star Wars/other games night, gave out a ton of cool swag, had a conference party with an 80&apos;s band, a Back to the Future style DeLorean, and food trucks, did a &quot;Ready Player One&quot; movie night at the theater, and had 80&apos;s video games throughout the halls of the conference center. Plus, the hotel accommodations there are top notch (I stay in a lot of hotels due to business travel so I can vouch for how nice Little America/Grand America is in Salt Lake City). What a blast!

[![](/images/blog/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/2018-04-17-19.44.02-1024x768.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/04/2018-04-17-19.44.02.jpg)

Throughout the week I had a lot of fun discussions with people about tech, life, and more and got to hang out with some devs I&apos;ve worked with while doing training/consulting/architecture (people from J.B. Hunt, Crown Castle and many others I&apos;ve had in [my training classes](https://codewithdan.com)). If you work with Angular and haven&apos;t been to [ng-conf](https://www.ng-conf.org/) I highly recommend it. If you missed it this year all hope is not lost. You can also attend [AngularMix](https://angularmix.com/#!/) in October at Universal Studios Orlando (it&apos;s an enterprise-focused conference). The Angular team and a ton of great speakers will be there as well and you can hit the Universal parks too...which is super fun!

At ng-conf I did a 2-day Angular workshop with John Papa. We had a big group and people were really into the content and asked a lot of great questions. Thanks to everyone who attended the workshop and thanks to [Ward Bell](https://twitter.com/wardbell) and [Sander Elias](https://twitter.com/esosanderelias) for helping us out during the workshop labs.

[![](/images/blog/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/2018-04-17-14.00.48-1024x768.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/04/2018-04-17-14.00.48.jpg)

The funniest experience at the workshop came from Ward Bell. I decided to take a bathroom break while John was talking and realized I had my mic on me when I left the room....a big &quot;no no&quot; for a speaker since you can never trust the &quot;off&quot; light on the mic (there are plenty of funny stories about this type of scenario so I always avoid any issues). I saw Ward outside in the hallway and asked if he could watch the mic for me until I came back. He was happy to watch it - because he&apos;s just a nice guy. I came back, picked up the mic, thanked Ward and went back into the room (Ward said nothing).

As I walked into the room John starts laughing and then the entire audience starts laughing as well. You don&apos;t bring the mic to the bathroom to avoid experiences just like this! If you&apos;ve never spoken before, lesson #1 is NEVER bring a mic into a restroom (which I didn&apos;t). It&apos;s just gross and you never know if it might be on as well....which would be a bit awkward.

To continue the story, while I was in the restroom Ward apparently decided to turn on the mic and started singing a song - he&apos;s not real shy and actually has a really good singing voice. John was talking at the time so people were hearing about Angular while Ward was providing some background music. Except....people could only assume I was singing since I had the mic and left the room. John commented that Ward must be in the bathroom with me. It was super funny from what I heard. John said he was laughing so hard that tears were rolling down his face and the audience got a good laugh out of it too. I of course was oblivious to everything until I walked back in the room.

Ward always keeps me laughing - such a great friend who I enjoy hanging out with any time. But, I&apos;ll never trust him with a mic ever again. :-) There&apos;s a picture of Ward below in one of his glowing outfits.

[![](/images/blog/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/2018-04-16-20.32.54-1024x768.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/04/2018-04-16-20.32.54.jpg)

Ward Bell, John Papa, and I also had the opportunity to speak at a Utah developer meetup on Tuesday night - what a great group!

[![](/images/blog/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/2018-04-16-20.29.48-1024x768.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/04/2018-04-16-20.29.48.jpg)

In addition to the workshop and meetup I gave 2 talks at ng-conf as well. The first was a quick &quot;warm up&quot; talk on using containers with Angular on the main stage of the conference (day 1) that led into my second (50+ minute) talk on different ways containers can be used with Angular and other technologies. You can watch the talks below as well as all of the other talks from ng-conf! Check out the [Day 1 talk](https://www.youtube.com/watch?v=P1-HAN1g4_4) by my friends [Eric](https://twitter.com/ericsimons40) and [Albert](https://twitter.com/iamalbertpai) at StackBlitz too. They were right after my Day 1 talk, did a great job, and had a super cool demo that involved 1400+ people in the audience. Here are the talks I gave:

## Angular, Docker and Containers….in 5-ish Minutes

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/1jzztM7qJRY&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

## “Containerizing” Angular with Docker

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/cLT7eUWKZpg&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

My overall summary for ng-conf? It&apos;s seriously awesome! Kudos to [Joe Eames](https://twitter.com/josepheames), [Aaron Frost](https://twitter.com/aaronfrost), [Kip Lawrence](https://twitter.com/mightykip), and [Sunny Leggett](https://twitter.com/Sunny4days) for putting together another great conference. Thanks to everyone on the Angular team for their time and effort working on the framework and for sharing their knowledge in their talks. Finally, thanks to everyone that came and took the time to introduce yourself to me. I really enjoyed meeting everyone!

[![](/images/blog/containerizing-angular-with-docker-my-ng-conf-talks-and-overall-experience/2018-04-16-20.32.23-1024x768.webp)](https://blog.codewithdan.com/wp-content/uploads/2018/04/2018-04-16-20.32.23.jpg)</content:encoded></item><item><title>Channel 9 Video: Five Things About Docker</title><link>https://blog.codewithdan.com/channel-9-video-five-things-about-docker/</link><guid isPermaLink="true">https://blog.codewithdan.com/channel-9-video-five-things-about-docker/</guid><pubDate>Wed, 14 Feb 2018 00:00:00 GMT</pubDate><content:encoded>I had the opportunity to sit down with [Simona Cotin](https://twitter.com/simona_cotin) (Azure Developer Advocate at Microsoft) at the [AngularMix conference](https://angularmix.com) to talk about &quot;Five Things About Docker&quot;. The video is one in a series of &quot;Five Things&quot; videos being created and produced by the Azure Developer Advocate team at Microsoft and presents various tech topics in a fun way. This particular video is focused on Docker and a few of the things I really like about it.

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/tjzcilHe9iY&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

In this Tech 5 tutorial, I&apos;ll walk you through getting started with ASP.NET Core on Mac and Windows.

## Getting Started with ASP.NET Core

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/7oFH1nw2LRg&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

### View All Tech 5 Videos

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/videoseries?list=PLSPP1ouAVw7U0EbLWcclAm8mNhsF3pLPV&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Choosing the &quot;Right&quot; JavaScript Library/Framework for Your Application</title><link>https://blog.codewithdan.com/choosing-the-right-javascript-library-framework-for-your-application/</link><guid isPermaLink="true">https://blog.codewithdan.com/choosing-the-right-javascript-library-framework-for-your-application/</guid><pubDate>Fri, 24 Nov 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/choosing-the-right-javascript-library-framework-for-your-application/RightTool.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/RightTool.jpg)&quot;What&apos;s the &apos;right&apos; JavaScript library/framework for us to use?&quot;. That&apos;s a question that comes up a lot nowadays given the multitude of choices available and one that doesn&apos;t have a &quot;right&quot; answer of course. I&apos;m fond of saying, &quot;Use the right tool for the right job&quot; when I&apos;m onsite at a company [teaching a training class](https://www.codewithdan.com/products/productType/training) or providing architecture/consulting services. While I certainly have my technology preferences, to force them on someone or on one of the companies I work with would quite honestly be naive and shortsighted. If there&apos;s one thing I&apos;ve learned working in technology over 20 years now, it&apos;s that &quot;one size fits all&quot; is **never** a valid view in technology (or life in general). The world&apos;s way too diverse for &quot;one size fits all&quot;. 
We all like to think that our way is the &quot;right way&quot; (I include myself in that statement), but that view is very subjective and limited to what we like and don&apos;t like, what we know and are comfortable using, our personal experiences, the types of applications we&apos;re building, as well as many other factors.

When it comes to JavaScript libraries/frameworks, it&apos;s no secret that I&apos;m a [fan of Angular](https://blog.codewithdan.com/2017/08/26/5-key-benefits-of-angular-and-typescript/) and have been for many years going back to the AngularJS days. I&apos;ve seen both versions used very successfully in small and large companies. Having said that, this post won&apos;t focus on Angular at all. In fact, I like other options as well ([Vue.js](https://vuejs.org/) is one that I really like for example) and use them in my company&apos;s and our clients&apos; applications as appropriate. I don&apos;t believe in &quot;one size fits all&quot; as mentioned earlier and instead always try to focus on the &quot;right tool for the right job&quot;, or you might say &quot;right tool for the right app&quot;.

For example, about a year ago we needed to build a small data-centric app that dynamically rendered controls based on hierarchical NoSQL data and sent the edited data back to the server for processing. We started out using a framework and soon realized it was overkill for the business problem we were trying to solve. We needed a JavaScript data binding library to get the job done - nothing more. While we could&apos;ve gone with React, Angular, Vue.js or another similar option, we ended up going with [Knockout.js](http://knockoutjs.com/) (something we were very familiar with) because it provided the exact functionality we needed. That gets back to my &quot;right tool for the right job&quot; comment earlier. We&apos;re currently working with several companies that are building large-scale applications that have a lot of features. Something like Knockout.js probably _isn&apos;t_ the right tool for those types of applications because they need data binding plus several additional features that other libraries/frameworks can provide out-of-the-box.

Should you pick Angular, Vue.js, React or another library/framework? Many teams feel overwhelmed by the sheer number of choices out there and are afraid of making the &quot;wrong choice&quot;. Here are my general thoughts on making a choice. I won&apos;t be recommending a specific library/framework but instead walking through some key questions I think you should ask during the selection process.

## Do Our Users Care?

[![](/images/blog/choosing-the-right-javascript-library-framework-for-your-application/man-on-mac.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/man-on-mac.jpg)When a debate comes up about a library/framework I always like to say, &quot;My mom doesn&apos;t care what you use (and neither do your users)!&quot;. That could be said about the majority of clients using any application you or I have ever written. They want something that works, solves a business problem, and is quick and easy to use. Very few application users poke around and ask, &quot;Hmmm...I wonder what library/framework they&apos;re using?&quot;. When making a decision about a library/framework remember that the application you&apos;re writing is supposed to be solving a business problem for customers - not for you (in most cases anyway). Eliminate personal biases about libraries/frameworks and you&apos;ll ultimately make a better decision in the long run. We tend to gravitate to concepts we already know and feel comfortable using. It&apos;s important that we&apos;re willing to step outside of our &quot;comfort bounds&quot; though when making a decision. Every single library/framework I&apos;ve worked with over the years has pros and cons. We tend to ignore the cons of libraries/frameworks we&apos;re comfortable using, whether we acknowledge it or not.

As developers, we tend to get caught up in our own little world and get into time-wasting battles over &quot;who is right?&quot;. We often forget that as long as the application does what it&apos;s supposed to do, our clients will likely be happy using it. If we boil our job down, isn&apos;t adding business value and keeping users happy what the job is all about? As long as a given library/framework can be used to build a successful application, then the library/framework you choose really doesn&apos;t matter assuming it meets your business, performance, and maintenance goals. It just doesn&apos;t matter - my mom and your clients don&apos;t care. Of course, there&apos;s more to the app development story aside from keeping users happy and adding business value.

## What Will Make You the Most Productive?

[![](/images/blog/choosing-the-right-javascript-library-framework-for-your-application/gears.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/gears.jpg)What language, library, and/or framework will make your team the most productive? That&apos;s the next question I like to address. If it&apos;s JavaScript (since that&apos;s the focus of this post), are your team members proficient in ES5, ES2015, TypeScript, CoffeeScript, Elm, or something else? Do they already work with frameworks or have more of a scripting background?

I&apos;m not a fan of jumping into a &quot;popular&quot; library/framework without the pre-requisite skills to be productive using it. Doing that ultimately leads to more problems than solutions in my experience no matter how great a library/framework may be. I&apos;ve seen it happen time after time where a manager thinks they can send a developer to a class and that they&apos;ll come back knowing everything they need to know.

Not knowing the key aspects of a library/framework can lead to applications being built that aren&apos;t based on best practices and riddled with maintenance issues as a result (more on that in a moment). While a team can certainly be trained on a new language or library/framework, it takes time for them to become efficient and productive using it. Pick a small prototype application to build if you want to prove out a library/framework. Once the prototype app is created have a team discussion about how productive everyone felt they were, pros and cons, and general opinions from team members. I don&apos;t care if it&apos;s Angular, Vue.js, React or something else, start with something small if you&apos;re in the process of choosing a library/framework. Take the time to do a proof of concept.

## What&apos;s the Maintenance Story?

[![](/images/blog/choosing-the-right-javascript-library-framework-for-your-application/maintenance.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/maintenance.jpg)I&apos;ve done a lot of production support/maintenance on applications over my career and realized early on how important it is to build applications that are easy to maintain. Change is inevitable in the world of technology (yes - I&apos;m stating the obvious here) so going with a library/framework that your team feels comfortable maintaining is important. This includes evaluating how easy it&apos;ll be to hire new people that can hit the ground running with the chosen library/framework, taking into account contractor work (if your company uses contractors) and more.

A few questions to ask related to maintenance:

1. Are the developers and/or production support teams used to working with a compiler or a scripted language? Oftentimes it&apos;s not as simple as choosing a library/framework - you need to choose the language as well. That might seem obvious (JavaScript), but there are other options to consider, and choosing the underlying language that will be used along with the library/framework is important. Developers used to a compiler may like something like TypeScript, for example, whereas JavaScript developers with no compiler experience may feel more productive and comfortable using ES5 or ES2015.
2. Does your team write unit tests, end-to-end tests, etc.? Does the library/framework provide good support for that?
3. What is the deployment process like for the library/framework? Is it as simple as moving a few files or is there a build process involved?
4. Does the library/framework provide a way to organize code and features?
5. Does the library/framework provide a widely accepted style guide or list of best practices that developers on a team can follow to ease maintenance down the road?

The maintenance story is one of the most important factors to me personally when choosing a library/framework.

## What&apos;s the Longevity of the Library/Framework?

[![](/images/blog/choosing-the-right-javascript-library-framework-for-your-application/time.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/time.jpg)Before making a decision on any library/framework I recommend spending time looking at the source code repository. Here are a few questions to ask:

1. When was the last time the library/framework was updated? Is it stale or actively moving forward?
2. How does the library/framework team handle versioning and does it fit into how your team/company works?
3. How robust is the general open source community for the library/framework (this is a key question I always ask before jumping into a library/framework)?
4. How quickly are issues resolved? On a side note, don&apos;t judge a library/framework by the total number of unresolved issues. Some people tend to use the &quot;Issues&quot; area of a repository to post questions and make feature suggestions which aren&apos;t issues. I like to look at how often issues are being resolved to get a sense of the health of a given library/framework.
5. How many contributors does the library/framework have?
6. Is the library/framework supported by a full-time team or run by an open-source community? There are pros and cons to both of these.

I wish I had a dollar for every time I&apos;ve been asked, &quot;How long do you think library/framework X will be around?&quot;. It&apos;s a great question and something we all worry about. Some companies don&apos;t have the luxury of constantly updating their applications which is why teams are scared of picking a library/framework that may disappear one day. If only I had a crystal ball to help predict the future. :-)

JavaScript projects move fast and tend to have a lot of churn. I&apos;d recommend picking a library/framework that has been stable for at least a year, has a robust community behind it, and that updates frequently.

## Choose a Library or a Framework?

Are you looking for specific library functionality (such as rendering the UI and/or data binding) or do you want a full-featured framework that has a lot of functionality included out-of-the-box? Libraries typically target a few very specific features whereas frameworks cover a broad range of features.

If your team is already using a framework (on the server-side for example), then moving to a JavaScript framework may make sense to keep things as consistent as possible between the client and server. On the other hand, if you prefer to put together different libraries (similar to choosing what you want to eat at a buffet) so that you have the flexibility to swap out different features as needed, then one or more libraries may be what you&apos;re after. As with everything, there are pros and cons to both approaches.

[![](/images/blog/choosing-the-right-javascript-library-framework-for-your-application/blueprint.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/blueprint.jpg)

I was initially attracted to AngularJS (and now Angular) because of the framework functionality they provide. I have a Java and .NET background and have released many successful web apps over the years using frameworks. I like the consistency that frameworks typically bring to the table for developers on a team. Features such as UI rendering, data binding, routing, form validation, testing, and much more are available out-of-the-box in frameworks like Angular.

Libraries like React and others can provide a lot of functionality without the overhead of a &quot;framework&quot;. They can make it quicker and easier to get started (a very subjective statement, I realize) and are generally more lightweight depending on the functionality your application needs. So which is better - a library or a framework? Talk to 100 developers and you&apos;ll get 100 different answers. These certainly aren&apos;t the only options, but here are 3 of the big players as of today (in my opinion anyway) that I&apos;ve personally looked into, worked with directly, or seen used successfully at companies I work with.

**Vue.js** - [Vue.js](https://vuejs.org/) is a &quot;progressive JavaScript framework&quot; (although I&apos;ve always thought of it as a library). With additional scripts, you can build both large and small apps using Vue. In addition to being lightweight, it&apos;s also very fast and is really easy to get started using. If you&apos;re familiar with AngularJS (the 1.x version) you&apos;ll pick up on Vue very quickly. It&apos;s an open source project that is growing rapidly. It has a CLI to help get started with your first project: **npm install -g vue-cli**

**React** - [React](https://reactjs.org/) is a UI library that has many additional features (and 3rd party libraries) that can be added. It provides great performance, is easy to get started using, and is quite popular. A full-time team at Facebook as well as a robust open source community help run the project, and it&apos;s used heavily by Facebook itself - a bonus when it comes to longevity. A smaller variant of React called [Preact](https://preactjs.com/) is also available. React provides a CLI to help get started: **npm install -g create-react-app**

**Angular** - If you prefer a framework then try out [Angular](https://angular.io/). It provides a robust set of features out of the box that are all integrated. It also provides Ahead-of-Time (AOT) compilation for production builds and has a robust CLI. It&apos;s run by a full-time team at Google and has a robust open source community as well. It&apos;s used by a lot of key apps inside of Google, which is a bonus when it comes to longevity. Note that if you&apos;re new to it, &quot;AngularJS&quot; refers to the 1.x version while &quot;Angular&quot; refers to the 2+ version. Get started using the CLI with the following command: **npm install -g @angular/cli**

There are certainly several more libraries/frameworks that could be listed and the list will definitely change over time. I made a decision to only list ones that I&apos;ve had direct experience with either through development or working with a company.

## Are You Targeting Mobile?

If your apps will be run on mobile devices (web or &quot;native&quot;), how well does the library/framework you&apos;re looking at support mobile development? Do you have to build all of the mobile controls that are touch-optimized by hand? These and many additional questions can be asked as you&apos;re choosing a library/framework.

## What 3rd Party Options are Available?

[![](/images/blog/choosing-the-right-javascript-library-framework-for-your-application/light-bulb.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/light-bulb.jpg)Another factor to consider is the 3rd party options that are available for the library/framework you&apos;re considering. Do you really want to build that date picker or calendar from scratch (having done that once, I&apos;d argue &quot;NO!&quot;)? Having a robust set of 3rd party functionality that you can include in a library/framework is important, especially when it comes to productivity.

## Do You Really Need a Library/Framework?

Some people will argue that &quot;vanilla&quot; JavaScript is the way to go for JavaScript-centric applications so I wanted to mention that option. While I completely disagree with the &quot;vanilla&quot; JavaScript approach (especially for larger enterprise apps) for a variety of reasons, it&apos;s certainly a valid option to consider depending on the type of app you&apos;re building.

Why don&apos;t I like this approach? If the app being built is fairly small then the &quot;vanilla&quot; JavaScript approach may work fine. However, for more robust applications, everything has to be built from scratch which means reinventing the wheel for routing, data binding, form validation, history, and so on and so forth. It means knowing all of the browser quirks, security standards, new and upcoming technologies, and much more. Who has the time to consistently stay on top of all of that while also building business applications? What happens if the key person/people that built the &quot;vanilla&quot; JavaScript code base decide to leave for another job? Now you&apos;re not only worried about maintaining business applications, but also the custom JavaScript utilities or framework that someone built internally.

## Conclusion

Libraries/frameworks get very personal and it&apos;s important to take our personal biases out of the picture when making a library/framework decision. I&apos;ve tried to steer clear of diving into pros and cons for specific libraries/frameworks (aside from mentioning a few above) since I feel strongly that each person/team needs to experiment on their own before making a decision. Talk to someone you trust who has used a library/framework you&apos;re considering to get their feedback. Build a simple prototype and see how you feel afterward. Find answers to some of the questions mentioned in this post.

There are many additional questions and guidelines that could be listed to help in choosing the &quot;right&quot; library/framework for your team. All libraries/frameworks have their own set of pros and cons and all of them can be used to build small, medium, and large applications. Is one better than the other or the &quot;right&quot; one to choose? There&apos;s no way to answer that since it&apos;s very subjective. [Here&apos;s a post](https://www.sitepen.com/blog/2017/11/10/web-frameworks-conclusions/) that mentions JavaScript library/framework strengths and weaknesses and while it&apos;s very opinionated, it provides a starting point. Many other posts comparing libraries/frameworks are out there as well, just keep in mind that each one is typically quite subjective.

I hope some of the guidelines and concepts listed here will help the decision process for you and your team.</content:encoded></item><item><title>Angular Playground - Developing and Running Components in a Sandbox!</title><link>https://blog.codewithdan.com/angular-playground-developing-and-running-components-in-a-sandbox/</link><guid isPermaLink="true">https://blog.codewithdan.com/angular-playground-developing-and-running-components-in-a-sandbox/</guid><pubDate>Tue, 21 Nov 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/angular-playground-developing-and-running-components-in-a-sandbox/Angular-Playground.webp)](http://www.angularplayground.it/)

Scenario-Driven Development (SDD) - a term I&apos;d heard little about until my friend [Justin Couto](https://twitter.com/JustinCouto) encouraged me to check out his team&apos;s [Angular Playground](http://www.angularplayground.it/) tool. SDD didn&apos;t mean much to me when I first heard about it, but I decided to look into Angular Playground more and had one of those &quot;light bulb&quot; moments after I got it up and running. SDD was extremely cool....once I understood what it was all about!

Angular Playground is a relatively new tool from the [SoCreate](https://www.socreate.it) team (and former team member [Justin Schwartzenberger](https://twitter.com/schwarty)) that allows you to build and run Angular components in isolation (you can use it to develop pipes and directives this way too). What exactly does that mean? Normally when you create a component you need to fire up the entire application to try it out. While that&apos;s not a huge deal for small apps, it can be painful for larger apps, especially if you have to drill down into the app to get to the component you&apos;re building. If you&apos;re creating a child component you may have to wait until the parent component (which may be responsible for passing data to the child component) is ready to go. Sure, you can always fake the data in the child, but how do you run the child if there&apos;s no parent component in the app yet? You either mock the parent component or create an isolated view that only uses the child component. Either approach means adding code that you&apos;ll eventually end up throwing out. You may also be developing a component that will get data from a service that doesn&apos;t exist quite yet. With the playground, it&apos;s easy to handle that too.

With [Angular Playground](http://www.angularplayground.it/) you can develop and run a component in complete isolation - in its own sandbox. Each component in an app can be completely isolated from other components and you can apply different &quot;scenarios&quot; to a given component. For example, let&apos;s say you have a customer details component in your application. Here are a few examples of scenarios that might be added to the component&apos;s sandbox:

| **Scenario 1:** | Display the component with a lot of data |
| --- | --- |
| **Scenario 2:** | Display the component with a subset of data |
| **Scenario 3:** | Display the component without any data |
| **Scenario 4:** | Display the component with different component styles applied (could be several different scenarios for this one) |
| **Scenario 5:** | Display the component with different CSS classes applied to the component selector |
| **Scenario 6:** | Many more scenarios could potentially be added as needed... |

Normally doing these types of things can be a fair amount of work, but with Angular Playground, you can create a sandbox for the component and then define multiple scenarios. As you build the component, you can quickly flip through the different scenarios to see how the component looks and debug it as well since it&apos;s running directly in the browser. To clarify, this isn&apos;t end-to-end testing or really any type of testing per se. It&apos;s isolating the component into its own sandbox while you develop it to make it quicker and easier to see the component in different scenarios (such as different types of data being passed), have different look/feel scenarios if you haven&apos;t finalized the look/feel yet, experiment with different quantities of data for responsive design work, and much more.
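
Conceptually, a sandbox is just a component plus a named list of scenarios. The table above could be modeled with a tiny self-contained sketch like this (illustrative only - this is not the real angular-playground API, whose actual usage appears later in this post):

```typescript
// Illustrative model of the sandbox/scenario idea (NOT the real angular-playground API).
interface Scenario {
  name: string;
  template: string;
  context?: Record<string, unknown>; // data handed to the component for this scenario
}

class Sandbox {
  private scenarios: Scenario[] = [];
  constructor(readonly componentName: string) {}

  add(name: string, config: { template: string; context?: Record<string, unknown> }): this {
    this.scenarios.push({ name, ...config });
    return this; // chainable, so a sandbox reads like a list of scenarios
  }

  scenarioNames(): string[] {
    return this.scenarios.map(s => s.name);
  }
}

// Each scenario reuses the same component with different data.
const sandbox = new Sandbox('CustomerDetailsComponent')
  .add('With a lot of data', { template: '<cm-customer-details></cm-customer-details>', context: { customerCount: 500 } })
  .add('With a subset of data', { template: '<cm-customer-details></cm-customer-details>', context: { customerCount: 5 } })
  .add('Without any data', { template: '<cm-customer-details></cm-customer-details>', context: { customerCount: 0 } });
```

The key idea is that switching scenarios swaps the data and template around the component while the component code itself stays untouched.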

Here&apos;s an example of running different components using Angular Playground each with their own sandbox and scenarios:

[![](/images/blog/angular-playground-developing-and-running-components-in-a-sandbox/Angular-Playground-Demo.gif)](https://blog.codewithdan.com/wp-content/uploads/2017/11/Angular-Playground-Demo.gif)

## Getting Started with Angular Playground

**UPDATE:** Angular Playground can now be installed quickly and easily using the Angular CLI 6+! One simple command and you&apos;re ready to go:

**ng add angular-playground**

See [http://www.angularplayground.it/docs/getting-started/angular-cli](http://www.angularplayground.it/docs/getting-started/angular-cli) for details.

If you&apos;re not on Angular 6+, here&apos;s a quick walk-through on how you can get the playground going.

To get started with Angular Playground you&apos;ll want to walk through the [help docs](http://www.angularplayground.it/docs/getting-started/angular-cli) available on their website. I use the Angular CLI so I went through those docs and performed the following tasks:

1. Installed the Playground CLI:
    
    ```
    npm i angular-playground --save-dev
    ```
    
2. Added a **main.playground.ts** file to bootstrap the playground environment
3. Modified the **.angular-cli.json** file
4. Added an **angular-playground.json** file
5. Modified the **tsconfig.app.json** file
6. Added the following to the **scripts** section of **package.json**:
    
    ```
    &quot;playground&quot;: &quot;angular-playground&quot;
    ```
    

It took me around 5 minutes or so to make the modifications - a really simple process that they walk you through [step-by-step](http://www.angularplayground.it/docs/getting-started/angular-cli).

## Creating Component Sandboxes

Once you have the Angular Playground CLI tool in place you can run **npm run playground** to start the sandbox process. This will create a **sandboxes.ts** file in your **src** folder and start **ng serve**.

The next task you&apos;ll perform is to create your first sandbox. I started with a very basic component named [AboutComponent](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/about/about.component.ts) since it displays static data. I added an **about.component.sandbox.ts** file in the **about** feature folder at the same level as the component file (you can certainly put sandbox files in a subfolder if desired as well):

```

import { sandboxOf } from &apos;angular-playground&apos;;
import { AboutComponent } from &apos;./about.component&apos;;

export default sandboxOf(AboutComponent)
  .add(&apos;About Component&apos;, {
    template: `&lt;cm-about&gt;&lt;/cm-about&gt;`
  });
```

This creates a single scenario that runs the component. To try it out I went to **http://localhost:4201** in the browser (recall that I ran **npm run playground** earlier to start the playground process) and pressed **ctrl + p**. I then typed the name of the scenario (the letter &quot;A&quot; will do here) in the command bar search box to select and run the **about** component. This particular scenario isn&apos;t very interesting given the static data, but it made it easy to check if I had everything configured correctly for the playground to run.
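The chainable API shown above is essentially a builder that collects named scenarios. As a rough mental model only (this is a hypothetical sketch in plain JavaScript, not Angular Playground&apos;s actual implementation), `sandboxOf()` can be pictured like this:

```

// Hypothetical sketch of a chainable scenario builder in the spirit of
// sandboxOf() - NOT Angular Playground's actual implementation.
function sandboxOf(component, config) {
  return {
    component: component,
    config: config || {},
    scenarios: [],
    // Each add() call stores a named scenario and returns the sandbox
    // object itself so calls can be chained.
    add(name, scenario) {
      this.scenarios.push(Object.assign({ name: name }, scenario));
      return this;
    }
  };
}

// Example: register two scenarios for a component (the template strings
// here are placeholders rather than real Angular templates)
const sb = sandboxOf('AboutComponent')
  .add('About Component', { template: '[cm-about]' })
  .add('About Component (alternate)', { template: '[cm-about]' });

console.log(sb.scenarios.map(function (s) { return s.name; }));
```

The important part is that each scenario is just a named configuration object, which is why the command bar can list them all and swap between them instantly.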

Next, I decided to create multiple scenarios for a [customer details component](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/customer/customer-details.component.ts) that I had in one of the sample apps we use in our [Angular Development](https://www.codewithdan.com/products/angular-programming) training classes called [Angular JumpStart](https://github.com/DanWahlin/Angular-JumpStart). The customer details component is more involved due to ActivatedRoute (for accessing route parameter data) and [DataService](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/core/services/data.service.ts) being injected into the constructor. I created a few [mock classes, functions and variables](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/shared/mocks.ts) for routing and for the data service and was off and running fairly quickly.

Here are the two scenarios added for the customer details component. The first provides data to the component and the second runs the component without any data to see how it responds.

```

import { sandboxOf } from &apos;angular-playground&apos;;
import { SharedModule } from &apos;../shared/shared.module&apos;;
import { CoreModule } from &apos;../core/core.module&apos;;
import { DataService } from &apos;../core/services/data.service&apos;;
import { CustomerDetailsComponent } from &apos;./customer-details.component&apos;;
import { MockDataService, MockActivatedRoute, getActivatedRouteWithParent } from &apos;../shared/mocks&apos;;
import { ActivatedRoute } from &apos;@angular/router&apos;;

const sandboxConfig = {
  imports: [ SharedModule, CoreModule ],
  providers: [
      { provide: DataService, useClass: MockDataService },
      { provide: ActivatedRoute, useFactory: () =&gt; { 
        let route = getActivatedRouteWithParent([{ id: &apos;1&apos; }]);  
        return route;
      }}
  ],
  label: &apos;Customer Details Component&apos;
};

export default sandboxOf(CustomerDetailsComponent, sandboxConfig)
  .add(&apos;With a Customer&apos;, {
    template: `&lt;cm-customer-details&gt;&lt;/cm-customer-details&gt;`    
  })
  .add(&apos;Without a Customer&apos;, {
    template: `&lt;cm-customer-details&gt;&lt;/cm-customer-details&gt;`,
    providers: [
      { provide: ActivatedRoute, useFactory: () =&gt; { 
        let route = getActivatedRouteWithParent([{ id: null }]);  
        return route;
      }}
    ]   
  });
```

After getting the scenarios in place I pressed **ctrl + p** again, typed &quot;C&quot; into the command bar that appears, selected the correct scenario for the customer details component, and was able to see the component live in its own sandbox. As a quick side note, when the scenario command bar displays you can use the **up** and **down** arrows to move around (see [these docs](http://www.angularplayground.it/how-to/command-bar-open) for details on navigating between scenarios).

At this point, I thought to myself, &quot;This is very cool!&quot; No need to load the entire app to try out the component while developing it. I could even build and run a child component (which the customer details component actually is) in a completely isolated way without the parent component being present. As I made changes to the component and saved the file, the current scenario automatically refreshed in the browser.

You can find additional sandbox examples from the [Angular JumpStart](https://github.com/DanWahlin/Angular-JumpStart) app below:

- [CustomerOrdersComponent](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/customer/customer-orders.component.sandbox.ts)
- [CustomersCardComponent](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/customers/customers-card.component.sandbox.ts)
- [CustomersGridComponent](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/customers/customers-grid.component.sandbox.ts)
- [CustomersComponent](https://github.com/DanWahlin/Angular-JumpStart/blob/master/src/app/customers/customers.component.sandbox.ts)

## Conclusion

Although this post only scratches the surface of [Angular Playground](http://www.angularplayground.it/), I hope it gives you an idea of what&apos;s possible. Scenario-Driven Development is a great way to build and run your components in their own sandbox with multiple scenarios. I can definitely see how using the playground can significantly speed up development time and increase overall productivity.

Check out the [Angular Playground](http://www.angularplayground.it/) site for more information or give the [Angular JumpStart](https://github.com/DanWahlin/Angular-JumpStart) app a try and see the playground in action. The readme file has all of the details on how to run the app and the playground.</content:encoded></item><item><title>Node.js/Express Convention-Based Routes</title><link>https://blog.codewithdan.com/node-js-express-convention-based-routes/</link><guid isPermaLink="true">https://blog.codewithdan.com/node-js-express-convention-based-routes/</guid><pubDate>Thu, 16 Nov 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/node-js-express-convention-based-routes/2017-10-27_20-17-22.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/11/2017-10-27_20-17-22.png)I&apos;ve always been a fan of convention-based routing so I converted a local route generation script I&apos;ve been using with Node.js/Express applications into an npm package called [express-convention-routes](https://www.npmjs.com/package/express-convention-routes). The package can be used to automate the creation of Express routes based on a directory structure.

What&apos;s a convention-based Express route? It&apos;s a route that is dynamically generated and associated with a &quot;controller&quot; function without having to explicitly code the route yourself (i.e. you don&apos;t write code such as app.use(&apos;/foo&apos;, router)). express-convention-routes creates routes automatically by parsing a convention-based folder structure such as the one below when the server first starts.

```
-controllers
    -customers
        -customers.controller.js
    -api
        -cart
            -cart.controller.js
    -index.controller.js

```

This allows application routes to be created without having to write any app.use() code to define the individual routes. Using the previous folder structure, express-convention-routes would create the following routes (and associate them with the appropriate &quot;controller&quot; functions):

```
/customers
/api/cart   
/

```

Each folder contains a &quot;controller&quot; file that defines the functionality to run for the given route. For example, if you want a **root** route you&apos;d add a file into the root **controllers** folder (**index.controller.js** for example). If you want an **api/cart** route you&apos;d create that folder structure under the controllers folder (see the folder example above) and add a &quot;controller&quot; file such as **cart.controller.js** into the **api/cart** folder. You can name the controller files anything you&apos;d like and they can have as many HTTP actions (GET/POST/PUT/DELETE, etc.) in them as you want.

To get started using the npm package, perform the following steps:

1. Install the **express-convention-routes** package locally: `npm install express-convention-routes --save`
2. Create a **controllers** folder at the root of your Express project.
3. To create a root (/) route, add an **index.controller.js** file into the folder (you can name the file whatever you&apos;d like). Put the following code into the file:
    
    ```
    
      module.exports = function (router) {
        router.get(&apos;/&apos;, function (req, res) {
            res.end(&apos;Hello from root route!&apos;);
        });
      };
    ```
    
4. To create a **/customers** route, create a subfolder under **controllers** named **customers**.
5. Add a **customers.controller.js** file into the **customers** folder (you can name the file anything you&apos;d like):
    
    ```
    
      module.exports = function (router) {
        router.get(&apos;/&apos;, function (req, res) {
            res.end(&apos;Hello from the /customers route!&apos;);
        });
      };
    ```
    
6. Once the routing folder structure is created, add the following code into your Express server file (index.js, server.js, etc.) to load the routes automatically from the &quot;controllers&quot; folder when the Express server starts:
    
    ```
    
      var express = require(&apos;express&apos;),
          app = express(),
          router = require(&apos;express-convention-routes&apos;);
    
      router.load(app, {
        //Defaults to &quot;./controllers&quot; but showing for example
        routesDirectory: &apos;./controllers&apos;, 
    
        //Root directory where your server is running
        rootDirectory: __dirname,
        
        //Do you want the created routes to be shown in the console?
        logRoutes: true
      });
    ```
    
7. Try out the included sample app by running the following commands:
    - `npm install`
    - `npm start`
8. The sample app included with the package (see the [GitHub project](https://github.com/DanWahlin/express-convention-routes)) follows a feature-based approach where the controller and associated view live in the same folder (Handlebars is used for the views in the sample). If you prefer the more traditional approach where all of the views live in the &quot;views&quot; folder, you can simply move the .hbs files into the appropriate folders there.

I originally got the convention-based routes idea from ASP.NET MVC (as well as other MVC frameworks) and KrakenJS ([http://krakenjs.com](http://krakenjs.com)). These frameworks automate the process of creating routes so I wanted to do something similar with [express-convention-routes](https://www.npmjs.com/package/express-convention-routes) while keeping the dependencies as minimal as possible and the code as simple as possible.</content:encoded></item><item><title>Pushing Real-Time Data to an Angular Service using Web Sockets</title><link>https://blog.codewithdan.com/pushing-real-time-data-to-an-angular-service-using-web-sockets/</link><guid isPermaLink="true">https://blog.codewithdan.com/pushing-real-time-data-to-an-angular-service-using-web-sockets/</guid><pubDate>Tue, 07 Nov 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/pushing-real-time-data-to-an-angular-service-using-web-sockets/angular.svg)](https://blog.codewithdan.com/wp-content/uploads/2016/09/angular.svg)One of the questions I&apos;ve been asked a lot lately in my [Angular training classes](https://www.codewithdan.com/products/angular-programming), at [conference workshops](https://devintersection.com/#!/workshops), and when working with different companies has been, &quot;How can I push data to an Angular application from the server?&quot;. Pushing data from the server to the client is useful when applications need to display real-time data or when they want to leverage the speed and low-latency benefits provided by TCP/IP Web Socket connections.

My answer to this question (up to this point anyway) has been to provide a high-level discussion of how Web Sockets can be used to push data to an Angular service. The general flow looks something like this:

1. Add Web Socket functionality on the server
2. Create an Angular service that subscribes to the data stream provided by the server
3. Return an observable from the Angular service that a component can subscribe to
4. Emit data received in the Angular service (from the server) to observable subscribers
5. Subscribe to the service observable in a component

While it&apos;s fairly easy to follow the flow described above, having an actual example makes it much easier to demonstrate the overall concept. As a result, I decided to create a simple [Angular/Web Socket proof of concept project](https://github.com/DanWahlin/Angular-WebSockets) that I&apos;ll briefly walk through here.

## 1\. Add Web Socket Functionality to the Server

There are a lot of options that can be used to add Web Socket functionality to the server - it really depends on what language/framework you prefer. For the Angular/Web Socket example project, I went with Node.js and [socket.io](https://socket.io/) since it&apos;s easy to get up and running on any OS. The server itself is extremely simple (keep in mind that I purposely kept it very basic to demonstrate the overall concept): it starts a timer once a client connection is made (used to simulate data changing on the server) and pushes data to one or more clients each time the timer fires.

```

const express = require(&apos;express&apos;),
      app = express(),
      server = require(&apos;http&apos;).createServer(app),
      io = require(&apos;socket.io&apos;)(server);

let timerId = null,
    sockets = new Set();

//This example emits to individual sockets (track by sockets Set above).
//Could also add sockets to a &quot;room&quot; as well using socket.join(&apos;roomId&apos;)
//https://socket.io/docs/server-api/#socket-join-room-callback

app.use(express.static(__dirname + &apos;/dist&apos;)); 

io.on(&apos;connection&apos;, socket =&gt; {

  sockets.add(socket);
  console.log(`Socket ${socket.id} added`);

  if (!timerId) {
    startTimer();
  }

  socket.on(&apos;clientdata&apos;, data =&gt; {
    console.log(data);
  });

  socket.on(&apos;disconnect&apos;, () =&gt; {
    console.log(`Deleting socket: ${socket.id}`);
    sockets.delete(socket);
    console.log(`Remaining sockets: ${sockets.size}`);
  });

});

function startTimer() {
  //Simulate stock data received by the server that needs 
  //to be pushed to clients
  timerId = setInterval(() =&gt; {
    if (!sockets.size) {
      clearInterval(timerId);
      timerId = null;
      console.log(`Timer stopped`);
      return;
    }
    let value = ((Math.random() * 50) + 1).toFixed(2);
    //See comment above about using a &quot;room&quot; to emit to an entire
    //group of sockets if appropriate for your scenario
    //This example tracks each socket and emits to each one
    for (const s of sockets) {
      console.log(`Emitting value: ${value}`);
      s.emit(&apos;data&apos;, { data: value });
    }

  }, 2000);
}

server.listen(8080);
console.log(&apos;Visit http://localhost:8080 in your browser&apos;);

```

The key part of the code is found in the **io.on(&apos;connection&apos;, ...)** section. This code adds each client socket connection to a set, starts the timer when the first connection is made, and removes a given socket from the set when a client disconnects. The **startTimer()** function simulates data changing on the server and iterates through the sockets, pushing data back to connected clients (note that there are additional techniques that can be used to push data to multiple clients - see the included comments).
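If it helps to see the connection-tracking pattern on its own, here&apos;s a hedged sketch that isolates it from socket.io and Express. The names are illustrative, and a synchronous **tick()** stands in for the **setInterval** callback so the stop-when-empty behavior is easy to follow:

```

// Illustrative sketch of the connection-tracking pattern described above,
// isolated from socket.io. A synchronous tick() stands in for the
// setInterval callback so the stop-when-empty logic is easy to see.
function createBroadcaster() {
  const sockets = new Set();
  let running = false;

  return {
    add(socket) {
      sockets.add(socket);
      running = true; // the first connection "starts the timer"
    },
    remove(socket) {
      sockets.delete(socket);
    },
    // One timer tick: stop when no clients remain, otherwise emit a
    // simulated quote to every tracked socket.
    tick() {
      if (sockets.size === 0) {
        running = false; // equivalent to clearInterval in the real server
        return null;
      }
      const value = ((Math.random() * 50) + 1).toFixed(2);
      for (const s of sockets) {
        s.emit('data', { data: value });
      }
      return value;
    },
    isRunning() {
      return running;
    }
  };
}

// Usage: a fake socket just records what it receives
const received = [];
const fakeSocket = { emit: function (event, payload) { received.push(payload.data); } };

const broadcaster = createBroadcaster();
broadcaster.add(fakeSocket);
const sent = broadcaster.tick();      // emits one value to the fake socket
broadcaster.remove(fakeSocket);
broadcaster.tick();                   // no clients left, so the "timer" stops
console.log(broadcaster.isRunning()); // false
```

The real server does the same bookkeeping, just with actual socket.io sockets and a 2-second interval driving **tick()**.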

The next 3 steps all relate to the Angular service.

## 2\. Create an Angular Service that Subscribes to the Data Stream Provided by the Server

## 3\. Return an Observable from the Angular Service that a Component can Subscribe to

## 4\. Emit Data Received in the Angular Service (from the Server) to Observable Subscribers

The Angular service subscribes to the data being pushed from the server using a script provided by socket.io (the script is defined in index.html). The service&apos;s **getQuotes()** function first connects to the server by invoking the socket.io client. It then hooks the returned socket to &quot;data&quot; messages returned from the server. Finally, it returns an **observable** to the caller. The **observable** is created by calling **new Observable()** in the **createObservable()** function.

As Web Socket data is received in the Angular service, the observer object created in **createObservable()** is used to pass the data to any Angular subscribers by calling **observer.next(res.data)**. In essence, the Angular service simply forwards any data it receives to subscribers.
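Stripped of Angular and RxJS, the forwarding pattern amounts to capturing the observer when a subscriber shows up and calling **observer.next()** whenever the socket delivers data. The following is a minimal plain-JavaScript sketch using a hand-rolled observable (not real RxJS) to illustrate the idea:

```

// Minimal hand-rolled stand-in for the RxJS Observable used in this post,
// just enough to show the forwarding pattern (not real RxJS).
class SimpleObservable {
  constructor(subscribeFn) {
    this.subscribeFn = subscribeFn;
  }
  subscribe(next) {
    this.subscribeFn({ next: next });
    return { unsubscribe() {} };
  }
}

class DataService {
  getQuotes() {
    // In the real service, this.socket.on('data', ...) feeds the observer;
    // here pushFromServer() simulates that socket callback.
    return new SimpleObservable(observer => {
      this.observer = observer;
    });
  }
  pushFromServer(value) {
    // Forward whatever the "server" sent to the subscriber
    if (this.observer) {
      this.observer.next(value);
    }
  }
}

const service = new DataService();
const quotes = [];
service.getQuotes().subscribe(q => quotes.push(q));
service.pushFromServer(42.5); // quotes is now [42.5]
```

Note that, like the service shown in this post, this sketch only keeps track of the most recent observer, so it supports a single subscriber at a time.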

```

import { Injectable } from &apos;@angular/core&apos;;
import { Observable } from &apos;rxjs/Observable&apos;;
import { Observer } from &apos;rxjs/Observer&apos;;
import { map, catchError } from &apos;rxjs/operators&apos;;
import * as socketIo from &apos;socket.io-client&apos;;

import { Socket } from &apos;../shared/interfaces&apos;;

declare var io : {
  connect(url: string): Socket;
};

@Injectable()
export class DataService {

  socket: Socket;
  observer: Observer&lt;number&gt;;

  getQuotes() : Observable&lt;number&gt; {
    this.socket = socketIo(&apos;http://localhost:8080&apos;);

    this.socket.on(&apos;data&apos;, (res) =&gt; {
      this.observer.next(res.data);
    });

    return this.createObservable();
  }

  createObservable() : Observable&lt;number&gt; {
      return new Observable(observer =&gt; {
        this.observer = observer;
      });
  }

  private handleError(error) {
    console.error(&apos;server error:&apos;, error);
    if (error.error instanceof Error) {
        let errMessage = error.error.message;
        return Observable.throw(errMessage);
    }
    return Observable.throw(error || &apos;Socket.io server error&apos;);
  }

}
```

## 5\. Subscribe to the Service Observable in a Component

The final step involves a component subscribing to the observable returned from the service&apos;s **getQuotes()** function. In the following code, **DataService** is injected into the component&apos;s constructor and then used in the **ngOnInit()** function to call **getQuotes()** and subscribe to the observable. Data that streams into the subscription is fed into a **stockQuote** property that is then rendered in the UI.

Note that the subscription object returned from calling **subscribe()** is captured in a **sub** property and used to unsubscribe from the **observable** when **ngOnDestroy()** is called.

```

import { Component, OnInit, OnDestroy } from &apos;@angular/core&apos;;
import { DataService } from &apos;./core/data.service&apos;;
import { Subscription } from &apos;rxjs/Subscription&apos;;

@Component({
  selector: &apos;app-root&apos;,
  templateUrl: &apos;./app.component.html&apos;
})
export class AppComponent implements OnInit, OnDestroy {

  stockQuote: number;
  sub: Subscription;

  constructor(private dataService: DataService) { }

  ngOnInit() {
    this.sub = this.dataService.getQuotes()
        .subscribe(quote =&gt; {
          this.stockQuote = quote;
        });
  }

  ngOnDestroy() {
    this.sub.unsubscribe();
  }
}
```


## Conclusion

Although this example is intentionally kept quite simple (there&apos;s much more that could be added), it hopefully provides a nice starting point if you&apos;re interested in streaming data to an Angular service using Web Sockets. The complete project can be found at [https://github.com/DanWahlin/Angular-WebSockets](https://github.com/DanWahlin/Angular-WebSockets).</content:encoded></item><item><title>Tech 5 Tutorial: Getting Started with Docker</title><link>https://blog.codewithdan.com/tech-5-tutorial-getting-started-with-docker/</link><guid isPermaLink="true">https://blog.codewithdan.com/tech-5-tutorial-getting-started-with-docker/</guid><pubDate>Fri, 03 Nov 2017 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/tech-5-tutorial-getting-started-with-docker/2017-10-27_20-03-24.webp)

In this Tech 5 tutorial, I&apos;ll walk you through the core concepts of Docker and how you can get started with Docker Community Edition, images, and containers.

## Getting Started with Docker

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/5W4wjJA4BUQ&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

### View All Tech 5 Videos

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/videoseries?list=PLSPP1ouAVw7U0EbLWcclAm8mNhsF3pLPV&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Video: Interview from ng-conf on TypeScript, Angular, Docker and More</title><link>https://blog.codewithdan.com/video-interview-from-ng-conf-on-typescript-angular-docker-and-more/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-interview-from-ng-conf-on-typescript-angular-docker-and-more/</guid><pubDate>Sun, 29 Oct 2017 00:00:00 GMT</pubDate><content:encoded>I had the chance to talk with This Dot Media at ng-conf 2017 about TypeScript, Angular, Docker and more. It was a fun discussion that covered a lot of material in 15 minutes. You can view the full interview below.

## This Dot Media Interview on TypeScript, Angular and Docker

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/1Grc2om9574&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

The talks mentioned in the interview can be viewed below:

## Diving into TypeScript

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/4xScMnaasG0&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

## Docker: What Every Developer Should Know

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/socWfhPJptE&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Tech 5 Tutorial: Getting Started with the Angular CLI</title><link>https://blog.codewithdan.com/tech-5-tutorial-getting-started-with-the-angular-cli/</link><guid isPermaLink="true">https://blog.codewithdan.com/tech-5-tutorial-getting-started-with-the-angular-cli/</guid><pubDate>Fri, 27 Oct 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/tech-5-tutorial-getting-started-with-the-angular-cli/2017-10-27_20-03-24.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/10/2017-10-27_20-03-24.png)

I&apos;m starting a new tutorial series called &quot;Tech 5&quot; (think &quot;Take 5&quot;...but for tech) that features short, focused, approximately 5-minute videos. Here&apos;s the first one which covers getting started with the Angular CLI.

## Getting Started with the Angular CLI

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/u1vqO27MD1I&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>&quot;Containerizing&quot; Angular with Docker</title><link>https://blog.codewithdan.com/containerizing-angular-with-docker/</link><guid isPermaLink="true">https://blog.codewithdan.com/containerizing-angular-with-docker/</guid><pubDate>Sat, 21 Oct 2017 00:00:00 GMT</pubDate><content:encoded>I had the opportunity to speak at the [AngularMix](http://angularmix.com) conference in Orlando about something that I think every developer should learn more about - containers. The talk was titled **&quot;Containerizing&quot; Angular with Docker** and discussed the following concepts as they relate to Angular applications:

- Application deployment over the years
- The need for containers
- How can Docker containers help?
- Creating a custom Docker image
- Moving containers to the cloud

You can get to the content shown in the talk at [http://codewithdan.me/angular-containers](http://codewithdan.me/angular-containers). The code that I demo in the talk can be found at the following links:

- [https://github.com/DanWahlin/Angular-Docker-Microservices](https://github.com/DanWahlin/Angular-Docker-Microservices)
- [https://github.com/DanWahlin/Angular-In120](https://github.com/DanWahlin/Angular-In120)

Check out the talk below.

## &quot;Containerizing&quot; Angular with Docker

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/DccKy8ADFOo&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Code with Dan Newsletter #3: AI and Web Components, Cosmos DB, CSS Grid, TypeScript Deep Dive</title><link>https://blog.codewithdan.com/code-with-dan-newsletter-3-ai-and-web-components-cosmos-db-css-grid-typescript-deep-dive/</link><guid isPermaLink="true">https://blog.codewithdan.com/code-with-dan-newsletter-3-ai-and-web-components-cosmos-db-css-grid-typescript-deep-dive/</guid><pubDate>Fri, 06 Oct 2017 00:00:00 GMT</pubDate><content:encoded>[Edition #3](http://mailchi.mp/codewithdan/code-with-dan-development-newsletter-1367905?platform=hootsuite) of the Code with Dan Web Development Newsletter is now out including a new video walk-through!

Topics in this edition:

- Artificial Intelligence (AI) and Web Components
- MEAN Stack and Cosmos DB
- CSS Grid
- Deep Dive into TypeScript
- Import Cost VSCode Plugin
- React 16 and the rewrite process
- My new Pluralsight course!

Watch my video walk-through of the content covered in this newsletter here:

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/t_Hp_88uHkw&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

Not subscribed to the newsletter yet? Sign-up below!</content:encoded></item><item><title>New Pluralsight Course: Integrating Angular with ASP.NET Core RESTful Services</title><link>https://blog.codewithdan.com/new-pluralsight-course-integrating-angular-with-asp-net-core-restful-services/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-integrating-angular-with-asp-net-core-restful-services/</guid><pubDate>Tue, 05 Sep 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/new-pluralsight-course-integrating-angular-with-asp-net-core-restful-services/integrating-angular-aspnet-core.webp)](https://www.pluralsight.com/courses/angular-aspnetcore-restful-services?utm_medium=affiliate&amp;utm_source=1457843)

I&apos;m excited to announce the release of my new course on [Pluralsight](https://www.pluralsight.com/courses/angular-aspnetcore-restful-services?utm_medium=affiliate&amp;utm_source=1457843) titled [**Integrating Angular with ASP.NET Core RESTful Services**](https://www.pluralsight.com/courses/angular-aspnetcore-restful-services?utm_medium=affiliate&amp;utm_source=1457843)! This course follows up my previous course which focused on [Angular and Node.js](https://www.pluralsight.com/courses/angular-nodejs-restful-services?utm_medium=affiliate&amp;utm_source=1457843). The code in this new class covers ASP.NET Core 2.0 or higher and Angular 4 or higher.

As with my previous course, I&apos;ll walk you through the process of using Angular to call into RESTful services and perform CRUD (Create, Read, Update and Delete) operations in an application to allow a user to view and modify data. However, in this course the services are built using C# and ASP.NET Core rather than JavaScript and Node.js.

If you&apos;ve asked the following questions then this course will provide the information you need:

- What&apos;s involved with creating a RESTful service using C# and ASP.NET Core?
- How do you use ASP.NET Core middleware?
- How do you document RESTful services using Swagger?
- How do Angular services work?
- How should Angular modules be organized?
- How do Observables and RxJS fit in with async operations?
- How do you use Angular&apos;s Http service to make async calls? (note that code showing how to use the new Angular 4.3+ HttpClient is also included in the course project)
- How do you create and validate Angular forms?
- What&apos;s the difference between template-driven and reactive forms?
- How do you work with headers and page data?
- What are CSRF attacks and how can I mitigate them with ASP.NET Core middleware?

Although the course uses ASP.NET Core RESTful services, the Angular concepts covered throughout the course can be used to call any RESTful service regardless of technology (if you&apos;re interested in Angular/Node.js check out my [previous course](https://www.pluralsight.com/courses/angular-nodejs-restful-services?utm_medium=affiliate&amp;utm_source=1457843))!

If you&apos;re a [Pluralsight](https://www.pluralsight.com/courses/angular-aspnetcore-restful-services?utm_medium=affiliate&amp;utm_source=1457843) subscriber I sincerely hope you enjoy the course. It was a lot of fun to build and film! Note that if you&apos;re not a Pluralsight subscriber, you can still watch 3 hours of the course for free with their [10-day trial](https://billing.pluralsight.com/individual/checkout/account-details?sku=IND-Y-PLUS-FT&amp;utm_medium=affiliate&amp;utm_source=1457843).

Here&apos;s a synopsis of the key topics as well as the course modules.

# Integrating Angular with ASP.NET Core RESTful Services

Learn how to build an Angular and ASP.NET Core application that can perform create, read, update and delete (CRUD) operations. Topics covered include building RESTful services with C# and ASP.NET Core, manipulating data in a relational database (Sqlite, SQL Server, PostgreSQL or any relational database supported by Entity Framework Core can also be used) and consuming services with Angular.

## **Key Angular Topics Covered:**

- TypeScript and how it can be used in Angular applications
- Code organization with Angular modules
- The role of ES2015 module loaders in Angular applications
- Promises versus Observables
- Learn how Observables work (a nice visual explanation is shown) and how to subscribe to them
- Learn how to create and use Angular services
- Angular&apos;s Http service and how it can be used to call into RESTful services. The course application also includes code to show how to use the new HttpClient as well if you&apos;re on Angular 4.3+.
- Differences between Template-driven and Reactive forms in Angular
- Directives used in Template-driven forms and how to use them for two-way data binding
- Directives and code used in Reactive forms
- Form validation techniques and custom validators
- How to build custom components and leverage Input and Output properties
- Working with headers sent by the server
- Building a custom pagination component
- CSRF attacks and how Angular can help

## **Key ASP.NET Core Topics Covered:**

- Understand GET, POST, PUT and DELETE and the role each plays with RESTful services
- Use ASP.NET Core middleware
- Create RESTful services capable of supporting CRUD operations using C# and ASP.NET Core
- Use Entity Framework Core for data access
- Paging data
- Working with headers on the server-side and client-side
- Preventing CSRF attacks

## **Course Modules:**

1. **Course Introduction**
    - Pre-requisites to Maximize Learning
    - Learning Goals
    - Server-side Technologies and Concepts
    - Client-side Technologies and Concepts
    - Running the Application on Windows
    - Running the Application on Mac
    - Running the Application with Docker
2. **Exploring the Angular and ASP.NET Core Application**
    - Getting Started - Using a Seed Project
    - Getting Started - Using the dotnet CLI
    - Getting Started - Using the Angular CLI
    - Exploring the Project Structure
    - Application Packages and Modules
    - Configuring the ES Module Loader
    - Angular Modules, Components and Services
3. **Retrieving Data Using a GET Action**
    - Injecting Data Repository and Logging Functionality
    - Creating a GET Action to Return Multiple Customers
    - Creating a GET Action to Return States
    - Creating a GET Action to Return a Single Customer
    - Making GET Requests with an Angular Service
    - Subscribing to an Observable in a Component
    - Displaying Customers in a Grid
    - Displaying a Customer in a Form
    - Converting to a Reactive Form
4. **Inserting Data Using a POST Action**
    - Creating a POST Action to Insert a Customer
    - Making a POST Request with an Angular Service
    - Modifying the Customer Form to Support Inserts
    - Exploring the Reactive Form
    - Modifying the Reactive Form submit() Function
5. **Updating Data Using a PUT Action**
    - Creating a PUT Action to Update a Customer
    - Making a PUT Request with an Angular Service
    - Modifying the Customer Form to Support Updates
    - Exploring the Reactive Form
    - Viewing Swagger Documentation
6. **Deleting Data Using a DELETE Action**
    - Creating a DELETE Action to Delete a Customer
    - Making a DELETE Request with an Angular Service
    - Modifying the Customer Form to Support Deletes
    - Exploring the Reactive Form
    - Viewing Swagger Documentation
7. **Data Paging, HTTP Headers and CSRF**
    - Adding a Paging Header to a RESTful Service Response
    - Accessing Headers and Data in an Angular Service
    - Adding Paging Support to a Component
    - Adding a Paging Component
    - CSRF Overview
    - Adding CSRF Functionality
    - Using a CSRF Token in an Angular Service</content:encoded></item><item><title>5 Key Benefits of Angular and TypeScript</title><link>https://blog.codewithdan.com/5-key-benefits-of-angular-and-typescript/</link><guid isPermaLink="true">https://blog.codewithdan.com/5-key-benefits-of-angular-and-typescript/</guid><pubDate>Sat, 26 Aug 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/5-key-benefits-of-angular-and-typescript/5-key-benefits-angular-typescript.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/08/5-key-benefits-angular-typescript.jpg)Over the past few months, I&apos;ve been asked the same general question about Angular multiple times in [onsite training classes](https://www.codewithdan.com/products/angular-programming), while helping customers with their architecture, or when talking with company leaders about the direction web technologies are heading. Here&apos;s the general question:

### **&quot;What are the key benefits that Angular and TypeScript can offer our development teams?&quot;**

It&apos;s a great question and one that should be asked before jumping into any new technology or framework. Most of the people asking are technology managers or directors interested in understanding the benefits that Angular can offer their teams (both technical and non-technical). They&apos;re concerned about application maintenance, developer productivity, handling change requests, the longevity of the framework, the pace of technology, and more.

After hearing that general question over and over I decided it was time to put together a post that outlines my top 5 reasons for using Angular and TypeScript. There are certainly **MANY** others I could list, but here are my top 5 benefits:

1. Consistency
2. Productivity
3. Maintainability
4. Modularity
5. Catch Errors Early

Before continuing I do want to mention that while I do work with Angular a lot (and enjoy it), I’m not a “one framework” type of guy. Why? Because if there’s one thing I’ve learned in life it’s that one size never fits all. When someone suggests that one technology or framework is \*always\* the best option I like to say, “Walk into a shoe store and say that you want to try on a pair of shoes. When they ask you what size you&apos;d like, tell them it doesn’t matter and see how that goes.” One size doesn’t fit all in shoes - or in technology.

My company ([Wahlin Consulting](https://codewithdan.com)) works with a lot of large and small companies and some are using AngularJS 1.x, some have moved to Angular v2+, some are using React, others are exploring Vue.js (and others), and a few are still using jQuery. This post is geared toward those who are considering Angular and TypeScript. If you&apos;re not interested in Angular/TypeScript then this post probably isn&apos;t for you.

Let&apos;s get started with benefit #1.

## Consistency

Code consistency is an important goal to strive for in any code base. If you or your team have to support production applications then you understand how important consistency is and why it leads to better maintenance. So what does Angular offer that provides consistency for teams? The overall framework is based on components and services that you can think of as Lego blocks. All components and services start out the same way. For example, all Angular components do the following:

1. Import required ES2015 modules
2. Define a @Component decorator (metadata)
3. Place code in a component class

Here&apos;s a visual showing how that works:

![](/images/blog/5-key-benefits-of-angular-and-typescript/2017-08-26_13-28-38.webp)

Regardless of what component you&apos;re writing, this overall structure is always followed. Sure, there are additional things you can add (implement an interface such as OnInit or others if using TypeScript, put templates inline versus in a separate file, and many others), but the overall structure of a component always looks the same. That&apos;s a good start and provides consistency as team members start out building components.
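To make those three steps concrete, here&apos;s a runnable sketch. A real component imports Component from @angular/core; the toy decorator below (illustrative only) simply records the metadata so you can see what the pattern does:

```typescript
// Illustrative stand-in for Angular's @Component decorator (a real app
// imports Component from '@angular/core'); it just records the metadata.
function Component(meta: { selector: string; template: string }) {
  return (target: any) => {
    target.metadata = meta;
    return target;
  };
}

// Step 3: the component class (steps 1 and 2 are the import and the decorator).
class HelloComponent {
  title = 'Hello Angular';
}

// With real decorator syntax, this is what @Component({...}) does for you.
const DecoratedComponent = Component({
  selector: 'app-hello',
  template: '{{ title }}'
})(HelloComponent);

console.log(DecoratedComponent.metadata.selector);
```

The takeaway is that the decorator is just metadata attached to a plain class, which is a big part of why every component you write ends up looking the same.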

Another big area of consistency in Angular is with services. AngularJS 1.x let you choose between factories, services, providers, values and constants when you had code that needed to be reused throughout an application. Some developers prefer factories while others lean toward services. They both do the same thing overall, but which one is the &quot;right one&quot;? The answer is that it&apos;s quite subjective. Unless a team agrees on a coding style, each developer goes off and does their own thing... something I&apos;ve always called &quot;cowboy coding&quot; (no offense to any cowboys out there :-)).

Fortunately, Angular makes deciding how to add reusable code into an application quite simple. Everything in that scenario is simply a service class:

```
import { Injectable } from &apos;@angular/core&apos;;
import { MyDependency } from &apos;./mydependency.service&apos;;

@Injectable()
export class MyService {
   constructor(private myDependency: MyDependency) {}
}
```

Any dependencies that the service requires can be &quot;injected&quot; into its constructor as shown in the code sample above (the dependency must have a [provider registered](https://angular.io/guide/dependency-injection) for this to work). This is another area where consistency is great. While components, services, and other types can certainly create instances of the objects they need, Angular provides built-in dependency injection that will inject the objects at runtime. This not only provides consistency across an application but also allows injected objects to be overridden if needed which can be useful in many scenarios.
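Here&apos;s a framework-free sketch of what that injection amounts to at runtime (the class names are illustrative; with a registered provider, Angular&apos;s injector does this wiring for you):

```typescript
// What Angular's injector does conceptually: build the dependency first,
// then pass it into the constructor of the class that needs it.
class LoggerService {
  log(message: string): string {
    return 'LOG: ' + message;
  }
}

class CustomerService {
  // A TypeScript parameter property: declares and assigns the field in one step.
  constructor(private logger: LoggerService) {}

  loadCustomer(id: number): string {
    return this.logger.log('loading customer ' + id);
  }
}

// Manual wiring; with a registered provider, Angular resolves this for you.
const customerService = new CustomerService(new LoggerService());
console.log(customerService.loadCustomer(42));
```

Because the dependency arrives through the constructor rather than being created inside the class, a test (or a different configuration) can hand in a replacement LoggerService without touching CustomerService.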

The Angular documentation also provides a [style guide](https://angular.io/guide/styleguide) that teams can use as a starting point to help drive consistency across projects. If I&apos;m a director, manager, team lead or simply in charge of ensuring consistency across a team, I&apos;m investing the necessary time to create a team-specific style guide for any framework used.

Finally, [Angular provides a CLI tool](https://cli.angular.io/) that can be used to create initial projects, add different features into an application (components, services, etc.), run tests, perform builds, lint code, and more. This provides a great foundation for teams to build on to drive consistency across team members and even across multiple teams in an enterprise.

The bottom line is that the consistency found in Angular components, services, pipes, directives and more allows a team to swim with the current rather than feeling like they&apos;re always swimming upstream against the current.  That leads quite nicely into the next benefit - productivity.

## Productivity

Consistency brings productivity into the picture as well. Developers don&apos;t have to worry as much about whether they&apos;re doing it the &quot;right way&quot;. Components and services look the same overall, reusable application code is put in service classes, ES6/ES2015 modules organize related functionality and allow code to be self-contained and self-responsible, data is passed into components using input properties and can be passed out using output properties, etc.

With greater consistency, you get the added benefit of productivity. When you learn how to write one component you can write another following the same general guidelines and code structure. Once you learn how to create a service class it&apos;s easy to create another one. The same patterns repeat throughout the framework, which will feel familiar if you&apos;ve worked with convention-based frameworks in the past. Combine all of this with the [Angular CLI](https://cli.angular.io/), code snippets that the team creates ([or use mine](https://blog.codewithdan.com/2016/08/30/angular-2-typescript-and-html-snippets-for-vs-code/) if you use [VS Code](http://code.visualstudio.com)) and you&apos;re consistent and productive.

![](/images/blog/5-key-benefits-of-angular-and-typescript/2017-08-26_19-34-18.webp)

If you use TypeScript to build your Angular applications then you also get several productivity benefits. In editors like [VS Code](https://code.visualstudio.com/) and [WebStorm](https://www.jetbrains.com/webstorm/specials/webstorm/webstorm.html), you have access to robust code help (IntelliSense) as you type, making it easier to discover types and the features they offer. If you use TypeScript interfaces, you can even get code help against the JSON data that comes back from calls to a back-end service. This is extremely helpful when various data/model objects are being used and manipulated by developers. TypeScript isn&apos;t only for Angular of course (you can use it with React, AngularJS, Vue.js, Node.js and any other JavaScript libraries/frameworks), but it integrates with Angular quite well.
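For example, here&apos;s a sketch of typing JSON from a back-end call with an interface (the Customer shape and sample payload here are illustrative):

```typescript
// Describing the shape of JSON returned by a back-end service with an
// interface. The Customer shape and the payload below are illustrative.
interface Customer {
  id: number;
  name: string;
  city: string;
}

const responseBody = '{"id": 1, "name": "Ada", "city": "Phoenix"}';

// The cast gives the editor full code help on 'customer' from here on.
const customer = JSON.parse(responseBody) as Customer;

console.log(customer.name + ' lives in ' + customer.city);
```

The interface costs nothing at runtime (it&apos;s erased during compilation) but gives every developer touching that data accurate IntelliSense and compile-time checks on property names.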

## Maintainability

I love open source projects but am also sensitive to the fact that some people or groups running a project get tied up with other aspects of life and projects stop being maintained on occasion. If you&apos;ve worked with OSS projects very long you know the story there. Anytime I consider using a project I look at the number of contributors, when it was last updated, and whether issues are being handled with regularity. I try really hard to only use and recommend projects, modules, libraries, etc. that are actively supported so that my projects and my customers&apos; projects are easier to maintain and easier to keep up-to-date down the road.

Having a dedicated team at Google building Angular combined with the open source contributions from the community is a HUGE selling point for me personally. In today&apos;s &quot;flavor of the day&quot; world of JavaScript you never know what is going to be around tomorrow of course, so having that solid foundation backing the framework gives me more confidence. The fact that Google uses Angular quite heavily inside of the company for applications is another bonus. Many will say that the same could be said for React too... and they&apos;d be correct. But, this post is focused on Angular.

You may be thinking, &quot;But Dan - the jump from AngularJS 1.x to Angular 2+ was huge and definitely not good when it came to maintaining our existing app!&quot; Yes, that&apos;s a valid point - even with ng-upgrade options. The jump between AngularJS and Angular was a direct result of the JavaScript language making huge gains forward with ES6/ES2015 functionality, combined with new features that modern browsers can now support. Had the Angular team NOT made that jump we&apos;d be calling Angular the &quot;Caveman Framework&quot; in no time at all since it wouldn&apos;t be leveraging the latest and greatest features that can help with performance, consistency, productivity, maintainability and overall development. It was a move that I&apos;m glad the Angular team made. I don&apos;t have a crystal ball (if anyone has one I&apos;d be interested in borrowing it), but I do know that the Angular team is very aware of how framework changes affect enterprise projects (Angular is used a lot inside of Google). Based on what I&apos;ve heard from the team I&apos;m confident that they&apos;ll provide a smooth road going forward as new versions are released.

In addition to the solid backing Angular has going for it behind the scenes with the Angular team, when you add in the consistency features mentioned earlier you also get code that will be easier to maintain in production. I&apos;ve had production support responsibilities for many years in my career (used to have a good old pager back in the day) so I&apos;m really sensitive to being able to write code that is consistent and easy to maintain. With the proper style guide, training, and knowledge, a team can take any framework and create a consistent way of developing applications of course. That&apos;s true not only for Angular but for many other frameworks out there as well, I realize. But, Angular provides a very clear path for writing code as mentioned earlier in the &quot;consistency&quot; section which ultimately leads to simplified maintenance. If Jim or Jane goes on vacation (or changes jobs), Victor can step in and fix bugs that are found or handle change requests with confidence.

Angular code can be built using TypeScript (my preference) which provides a host of benefits, especially in the enterprise. See the &quot;Catch Errors Early&quot; section below for my thoughts on TypeScript and some of the maintenance benefits it brings to the table.

Whether your team does your own production support or hands it off to another group, being able to build applications that are consistent, easy to maintain, and that use a framework backed by a full-time development team combined with a robust open source community is a key priority for most enterprises.

## Modularity

![](/images/blog/5-key-benefits-of-angular-and-typescript/bucket.webp)

Angular is all about organizing code into &quot;buckets&quot;. Everything you create, whether it&apos;s components, services, pipes, or directives, has to be organized into one or more buckets. If you come from a &quot;function spaghetti code&quot; background in your organization, the sanity that Angular and TypeScript bring to the table can be quite refreshing. The &quot;buckets&quot; I refer to are called &quot;modules&quot; in the Angular world. They provide a way to organize application functionality and divide it up into features and reusable chunks. Modules also offer other benefits such as lazy loading, where one or more application features are loaded in the background or on-demand.

Enterprise applications can grow quite large and the ability to divide the labor across multiple team members while keeping code organized is definitely achievable with Angular. Modules can be used to add organization into an application much like packages and namespaces do in other languages/frameworks like Java or .NET. I&apos;ll admit that a solid understanding of Angular modules and the way they can be used is crucial in order to use them successfully. However, once a team architects modules appropriately they can reap the benefits when it comes to the division of labor, code consistency, productivity, and maintenance.

## Catch Errors Early

[![](/images/blog/5-key-benefits-of-angular-and-typescript/oops-sign.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/08/oops-sign.jpg)Angular is built using TypeScript which brings many benefits to the table such as:

- TypeScript is a **superset of JavaScript**. TypeScript is not its own stand-alone language like CoffeeScript, Dart or others and that&apos;s super powerful. That means I can take existing ES5 or ES2015+ JavaScript code, plug it into a TypeScript .ts file (or even work with the .js file directly) and the code will work fine. TypeScript simply compiles/transpiles code down to ES5 or ES2015 depending on what you configure.
- TypeScript supports core ES2015 features as well as newer features like async/await (ES2017) and proposed features like decorators. I like to think of it as ES2015++. See supported features at [http://kangax.github.io/compat-table/es6](http://kangax.github.io/compat-table/es6).
- TypeScript provides support for types (primitives, interfaces, and other custom types). Yes - there&apos;s a reason TypeScript has the word &quot;Type&quot; in its name. Although types are optional, they&apos;re highly recommended if you want to catch errors early on in the development lifecycle, especially with larger applications. They make it much easier to see when something is passed or used incorrectly.
- TypeScript code can be debugged directly in the browser (or in an editor) as long as the proper source map files are created at build time.
- TypeScript allows you to use classes and/or functional programming techniques. You&apos;re not limited to one way of doing things and can opt-out of any TypeScript specific features.
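As a quick illustration of catching errors early, a typed function signature turns a bad call into a compile-time error instead of a runtime surprise (the names here are illustrative):

```typescript
// An illustrative typed function: the Order interface documents exactly
// what callers must pass, and the compiler enforces it.
interface Order {
  id: number;
  total: number;
}

function formatOrder(order: Order): string {
  return 'Order #' + order.id + ': $' + order.total.toFixed(2);
}

console.log(formatOrder({ id: 7, total: 19.5 }));

// Each of these is caught by the compiler, long before the code ships:
// formatOrder({ id: 7 });   // error: property 'total' is missing
// formatOrder('7');         // error: a string is not an Order
```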

I could list many other features (visit [TypeScript&apos;s site](http://www.typescriptlang.org/docs/home.html) for more), but ultimately, the Angular framework is built using TypeScript and if you use it in your projects (which I highly recommend) then you can catch errors early in the development lifecycle or while performing maintenance tasks. When it comes to enterprise applications, TypeScript offers &quot;guard rails&quot; for your JavaScript code to ensure that developers and teams don&apos;t go &quot;off the cliff&quot; as they&apos;re building applications.

In addition to the benefits TypeScript offers, Angular is also built with testability in mind. The [Angular CLI](https://cli.angular.io) makes the process of unit testing and end-to-end testing a snap (it relies on Karma and Jasmine by default for unit tests but you can use whatever testing tools you&apos;d like). Simply type **ng test** at the command line and any tests in the project will run. The **ng generate** command will automatically generate a spec file for you as you create a component, service, etc. If your organization is planning to write unit tests against your Angular code that&apos;s definitely another benefit that will help you catch errors early on in the development lifecycle.

## Summary

There are certainly many more benefits that could be discussed, but I wanted to focus on 5 that I think are important to consider. Are these the &quot;correct&quot; benefits to focus on? That&apos;s a very subjective question of course. For my company and many others that we work with, consistency, maintainability, productivity, modularity, and the ability to catch errors early are definitely at the top of the list.

It&apos;s important to note that many of these benefits can be achieved using other JavaScript libraries/frameworks as well (refer to my &quot;one size does not fit all&quot; discussion). The goal of this post is **NOT** to imply that Angular is the only client-side framework that offers these benefits. Instead, the goal is to help those considering Angular and TypeScript to better understand some of the key benefits they offer.

Topics in this edition:

- Node.js API authentication
- Microservices with Node.js and Express
- D3 graphics and charting
- TypeScript and Angular projects to learn from
- A minimalistic CSS library (when you want something smaller than Bootstrap)

Watch my video walk-through of the content covered in this newsletter here:

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/BtzX9ArrHyw&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

Not subscribed to the newsletter yet? Sign up below!

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/_nhGT0irHX0&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

**Topics covered in this edition:**

- [A beginner’s guide to making Progressive Web Apps](http://codewithdan.me/2tAcaCs)
- [VueJS vs Angular vs ReactJS with Demos](http://codewithdan.me/2tAq73h)
- [MSBuild in .NET Core](http://codewithdan.me/2tALvFI) (Video)
- [Docker for .NET Developers Series](http://codewithdan.me/2tAFpoQ)
- [Dan’s Flipboard Magazines](https://blog.codewithdan.com) (scroll to the bottom)

Not subscribed to the newsletter? Sign up below!

I had the opportunity to talk with Silicon Prairie News about Docker and some of the key benefits it offers. They interviewed me ahead of the [Heartland Developer Conference (HDC) event](http://heartlanddc.com) where I&apos;ll be giving a keynote titled &quot;Docker Containers: What Every Developer Should Know&quot;. The questions they asked in the interview included:

1. How long have you been working with Docker?
2. Can you briefly explain Docker for developers and readers who may not have a current understanding of what it does?
3. Do you think this is something that all developers should at least be educating themselves on even if they don&apos;t use it on a day-to-day basis?
4. Would everyone on a development team have to be educated on Docker or can individual team members learn it and specialize in it?
5. Docker is designed to increase reliability and minimize problems during the development phase, but what other benefits are there to developers?
6. What systems do Docker containers run on?
7. Are there any resources for someone to get a basic understanding of Docker before HDC (Heartland Developers Conference)?

[![](/images/blog/qa-with-silicon-prairie-news-about-docker-containers/2017-07-14_09-43-39.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/07/2017-07-14_09-43-39.png)

You can read the full Silicon Prairie News interview at [http://siliconprairienews.com/2017/07/qa-hdc-keynote-speaker-docker-captain-dan-wahlin](http://siliconprairienews.com/2017/07/qa-hdc-keynote-speaker-docker-captain-dan-wahlin).</content:encoded></item><item><title>Angular, TypeScript and HTML Snippets for VS Code</title><link>https://blog.codewithdan.com/angular-2-typescript-and-html-snippets-for-vs-code/</link><guid isPermaLink="true">https://blog.codewithdan.com/angular-2-typescript-and-html-snippets-for-vs-code/</guid><pubDate>Sat, 01 Apr 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/angular-2-typescript-and-html-snippets-for-vs-code/angular-e1459634290861.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/03/angular-e1459634290861.png)I use [VS Code](http://code.visualstudio.com) a lot in my development projects and recently put together a set of [Angular (v2 or higher) TypeScript and HTML snippets](https://marketplace.visualstudio.com/items?itemName=danwahlin.angular2-snippets) that can be used to enhance productivity when building Single Page Applications.  By typing the snippet prefix (which is “ag-”) in a TypeScript or HTML file you can quickly add the associated code into your file.

Here’s a list of the supported snippets:

## Angular TypeScript Snippets

```
ag-Bootstrap                     - Bootstrap snippet
ag-AppModule                     - Create the root app module (@NgModule) snippet
ag-AppFeatureModule              - Angular app feature module (@NgModule) snippet
ag-AppFeatureRoutingModule       - Angular app feature routing module (@NgModule) snippet
ag-CanActivateRoutingGuard       - Create a CanActivate routing guard snippet
ag-CanDeactivateRoutingGuard     - Create a CanDeactivate routing guard snippet
ag-Component                     - Component snippet
ag-HttpService                   - Service with Http snippet (deprecated)
ag-HttpClientService             - Service with HttpClient snippet
ag-InputProperty                 - @Input property snippet
ag-InputGetSetProperty           - @Input property with get/set snippet
ag-OutputEvent                   - @Output event snippet
ag-Pipe                          - Pipe snippet
ag-Routes                        - Angular routes snippet
ag-Route                         - Route definition snippet
ag-Service                       - Service snippet
ag-Subscribe                     - Observable subscribe snippet

```

 

## Angular HTML Template Snippets

```
ag-ClassBinding              - [class] binding snippet
ag-NgClass                   - [ngClass] snippet
ag-NgFor                     - *ngFor snippet
ag-NgForm                    - ngForm snippet
ag-NgIf                      - *ngIf snippet
ag-NgModel                   - [(ngModel)] binding snippet
ag-RouterLink                - Basic routerLink snippet
ag-RouterLinkWithParameter   - [routerLink] with route parameter snippet
ag-NgSwitch                  - [ngSwitch] snippet
ag-NgStyle                   - [ngStyle] snippet
ag-Select                    - select control using *ngFor snippet
ag-StyleBinding              - [style] binding snippet

```

In addition to typing the snippet prefix, you can also press Ctrl+Space on Windows or Linux, or Cmd+Space on Mac to activate the snippets.

## Installing the Angular TypeScript and HTML Snippets

```
Windows:  Select Ctrl+P and then type: ext install angular2-snippets
Mac:      Select ⌘+P and then type: ext install angular2-snippets

```

After restarting the editor, open a TypeScript file and type the &quot;ag-&quot; prefix to see the snippets.

NOTE: The VS Code extension gallery doesn&apos;t allow projects to be renamed after they are initially created so &quot;angular2-snippets&quot; will get you the latest version of the snippets even though &quot;2&quot; is in the name.

The following [walk-through](https://code.visualstudio.com/docs/editor/extension-gallery) provides additional details.</content:encoded></item><item><title>Behind the Scenes: Angular and ASP.NET Core Pluralsight Course Kickoff Video!</title><link>https://blog.codewithdan.com/behind-the-scenes-angular-and-asp-net-core-pluralsight-course-kickoff-video/</link><guid isPermaLink="true">https://blog.codewithdan.com/behind-the-scenes-angular-and-asp-net-core-pluralsight-course-kickoff-video/</guid><pubDate>Sat, 11 Mar 2017 00:00:00 GMT</pubDate><content:encoded>I&apos;ve had several people over the years ask about my workflow as I create video courses for [Pluralsight](https://pluralsight.com) and other sites so I thought I&apos;d provide some details. Since I&apos;m starting a brand new course for Pluralsight (Integrating Angular with ASP.NET Core RESTful Services) I filmed a few details about my equipment setup and also did a live recording (with bloopers!) of one of the video clips that will be used in the course. I know a lot of people are either interested in getting into video or simply want to know what really happens behind the scenes so I hope it&apos;s of use to you!

 

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/uxLjR-t2Is8&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>.NET Rocks Panel Interview: Docker, Containers and More</title><link>https://blog.codewithdan.com/net-rocks-panel-interview-docker-containers-and-more/</link><guid isPermaLink="true">https://blog.codewithdan.com/net-rocks-panel-interview-docker-containers-and-more/</guid><pubDate>Sat, 11 Feb 2017 00:00:00 GMT</pubDate><content:encoded>I had the chance to sit down with Carl and Richard from .NET Rocks as well as Michele Leroux Bustamante and Rick Van Rousselt while at the DevIntersection Europe conference to have a panel discussion on Docker, containers and more.

It was a lot of fun talking with everyone on the panel so feel free to tune in if you&apos;re interested in Docker and containers (use the link below to listen).

[Listen Here](http://dotnetrocks.com/?show=1386)</content:encoded></item><item><title>10 Angular and TypeScript Projects to Take You From Zero to Hero</title><link>https://blog.codewithdan.com/10-angular-and-typescript-projects-to-take-you-from-zero-to-hero/</link><guid isPermaLink="true">https://blog.codewithdan.com/10-angular-and-typescript-projects-to-take-you-from-zero-to-hero/</guid><pubDate>Wed, 08 Feb 2017 00:00:00 GMT</pubDate><content:encoded>There are a lot of great samples and posts out there to help get you started with Angular (version 2 or higher) as well as ES6/ES2015 and TypeScript. However, some are out of date, some may be more complex than you want to start with, and others have been abandoned and are no longer maintained. In this post I’m going to provide a list of 10 Angular/TypeScript projects that I’ve created that can take you from “Zero to Hero” if you like to explore project code and are interested in investing the time to learn.

![](/images/blog/10-angular-and-typescript-projects-to-take-you-from-zero-to-hero/angular.svg)The projects are listed in order from beginner to intermediate/advanced level to show the sequence I’d recommend if you’d like to start out slow or jump to a more robust project. Many of these projects are used in my [**Angular and TypeScript Application Development**](https://www.codewithdan.com/products/angular-programming) instructor-led training course. I realize that not everyone can take the course, which is why I’m listing the projects here to hopefully add value to the overall community.

On a related note, if you’re using the [VS Code editor](http://code.visualstudio.com/) (my preferred editor) check out my [Angular and TypeScript code snippets](https://blog.codewithdan.com/2016/08/30/angular-2-typescript-and-html-snippets-for-vs-code/) extension. The code snippets will save you a ton of time and significantly increase your productivity as you build Angular (v2 or higher) and TypeScript applications.

 

# Project 1: ES6/ES2015 Code Examples

 

| **Project URL:** | [https://github.com/DanWahlin/ES6Samples](https://github.com/DanWahlin/ES6Samples &quot;https://github.com/DanWahlin/ES6Samples&quot;) |
| --- | --- |
| **Level:** | Beginner |
| **Description:** | This project provides an introductory look at key features available in ES6/ES2015 which are important given that TypeScript is a superset of ES6/ES2015 features. All of the Angular projects below use TypeScript so if you’re new to the TypeScript language or to what ES6/ES2015 offers I’d recommend learning those concepts first before diving into Angular concepts.  Note that some people call it ES6, some call it ES2015, which is why I’m listing both here. ES6 is the same as ES2015. |
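
To give a quick taste of the kinds of ES6/ES2015 features the samples cover, here is a short sketch written as TypeScript (which accepts all of this syntax). The names are illustrative, not taken from the ES6Samples repo:

```typescript
// Classes, template literals, destructuring, arrow functions, and spread
// are all ES6/ES2015 features that carry straight over into TypeScript.
class Customer {
  constructor(public firstName: string, public lastName: string) {}

  get fullName(): string {
    return `${this.firstName} ${this.lastName}`;
  }
}

const customer = new Customer('Jane', 'Doe');

// Destructuring pulls properties out into local variables
const { firstName, lastName } = customer;
console.log(`${lastName}, ${firstName}`); // Doe, Jane

// Arrow functions keep small callbacks concise
const upper = (value: string) => value.toUpperCase();

// Spread copies an array without mutating the original
const names = ['Ada', 'Grace'];
const moreNames = [...names, 'Jane'];

console.log(customer.fullName); // Jane Doe
console.log(upper(firstName)); // JANE
console.log(moreNames.length); // 3
```

If this syntax looks unfamiliar, that is exactly why I recommend working through the ES6/ES2015 material before the Angular projects below.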

 

 

# Project 2: ES6/ES2015 Modules Example

 

| **Project URL:** | [https://github.com/DanWahlin/ES6-Modules-Starter](https://github.com/DanWahlin/ES6-Modules-Starter &quot;https://github.com/DanWahlin/ES6-Modules-Starter&quot;) |
| --- | --- |
| **Level:** | Beginner |
| **Description:** | ES6/ES2015 modules play an important role in Angular (v2 or higher) applications so learning the basics about how modules work, how a module loader is configured, etc. is very important if you want to build Angular applications. This project provides a simple starter example of getting started with modules and the SystemJS module loader.  What is a module loader? Browsers don’t support ES6/ES2015 modules quite yet, so a “polyfill” script is needed to add in the missing functionality. SystemJS is one of several module loaders out there that can be used to work with modules in browsers.  |
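
The core idea is small: a file becomes a module as soon as it exports something, and consumers pull those exports in with import statements that the module loader resolves. A minimal sketch (the file name and identifiers are illustrative, not from the starter repo):

```typescript
// customer.service.ts
// Using the export keyword makes this file an ES6/ES2015 module.
export class CustomerService {
  private customers: string[] = ['Ada', 'Grace'];

  getCustomers(): string[] {
    // Return a copy so callers cannot mutate internal state
    return [...this.customers];
  }
}

// A consumer file (say, main.ts) would then load it like this:
// import { CustomerService } from './customer.service';
const service = new CustomerService();
console.log(service.getCustomers().length); // 2
```

SystemJS (or Webpack, covered in later projects) is what actually fetches and wires these files together in the browser.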

 

 

# Project 3: TypeScript Code Examples

 

| **Project URL:** | [https://github.com/DanWahlin/TypeScriptDemos](https://github.com/DanWahlin/TypeScriptDemos &quot;https://github.com/DanWahlin/TypeScriptDemos&quot;) |
| --- | --- |
| **Level:** | Beginner |
| **Description:** | This project provides an introductory look at various features that are available in ES2015 and TypeScript. All of the Angular projects below use TypeScript so if you’re new to the language I’d recommend learning it first before diving into Angular concepts.  If you’d like more formal training on TypeScript you can also check out the [TypeScript Fundamentals](https://www.pluralsight.com/courses/typescript) video course that my good friend John Papa and I created for Pluralsight.  |
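
To show what TypeScript adds on top of ES2015, here is a small sketch of interfaces, union types, and type annotations (the names are my own, not from the demos repo):

```typescript
// Interfaces describe the shape of data; the compiler checks it for you.
interface Customer {
  id: number;
  name: string;
  email?: string; // optional property
}

// Union types constrain a value to a known set of literals
type SortDirection = 'asc' | 'desc';

function sortCustomers(customers: Customer[], direction: SortDirection): Customer[] {
  // Copy first so the caller's array is never mutated
  const sorted = [...customers].sort((a, b) => a.name.localeCompare(b.name));
  return direction === 'asc' ? sorted : sorted.reverse();
}

const customers: Customer[] = [
  { id: 2, name: 'Grace' },
  { id: 1, name: 'Ada' }
];

console.log(sortCustomers(customers, 'asc')[0].name); // Ada
console.log(sortCustomers(customers, 'desc')[0].name); // Grace
```

Passing anything other than 'asc' or 'desc', or a customer missing an id, becomes a compile-time error instead of a runtime surprise.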

 

 

# Project 4: Angular Hello World

 

| **Project URL:** | [https://github.com/DanWahlin/Angular-HelloWorld](https://github.com/DanWahlin/Angular-HelloWorld &quot;https://github.com/DanWahlin/Angular-HelloWorld&quot;) |
| --- | --- |
| **Level:** | Beginner |
| **Description:** | The Angular Hello World project provides a simple starter project for people who are brand new to Angular (version 2 or higher) and TypeScript. It provides a basic look at the project structure, using package.json and npm to load Angular modules, as well as TypeScript compilation with tsconfig.json. The project only contains a single module and component so if you want to experiment with various Angular features in a simple environment then this project will work well for you.  **Bonus:** Although this is a very basic project, it provides support for Webpack and Angular’s [Ahead-of-Time](https://angular.io/docs/ts/latest/cookbook/aot-compiler.html) (AOT) compilation feature which is a great way to speed up the load time and minimize the number of scripts loaded in the browser (and there are several other great benefits). The feature isn’t used by default but details can be found in the readme file if you’re interested in getting it going.  |

 

 

# Project 5: Angular Bare Bones

 

| **Project URL:** | [https://github.com/DanWahlin/Angular-BareBones](https://github.com/DanWahlin/Angular-BareBones &quot;https://github.com/DanWahlin/Angular-BareBones&quot;) |
| --- | --- |
| **Level:** | Beginner |
| **Description:** | The Angular Bare Bones project takes things up a level from the Hello World project and adds basic Angular routing, multiple components as well as a simple service. It’s a good project for beginners to look through to see how many of the key features that Angular and TypeScript provide can be tied together while still keeping the code very simple overall.  |

# Project 6: Angular Template-Driven and Reactive Forms

 

| **Project URL:** | [https://github.com/DanWahlin/Angular-Forms](https://github.com/DanWahlin/Angular-Forms &quot;https://github.com/DanWahlin/Angular-Forms&quot;) |
| --- | --- |
| **Level:** | Beginner/Intermediate |
| **Description:** | The Angular Forms project shows how to get started with data binding in forms using ngModel (template-driven approach) or the Reactive forms approach where form controls are defined in code and bound into the UI. Examples of custom validators are also included as well as examples of binding to different types of form controls and accessing submitted data.  If you’re interested in learning more about template-driven and reactive forms and how they can be used with a backend service, check out my [Integrating Angular with Node.js RESTful Services](https://www.pluralsight.com/courses/angular-nodejs-restful-services) course on Pluralsight.  |
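
On the custom validator point: in Angular a validator is just a function that returns null for valid input or an error map when validation fails. The sketch below mimics that contract without the framework (the real Angular version receives an AbstractControl rather than a raw string, and all names here are illustrative):

```typescript
// A validator returns null when the value is valid, or an error map when not.
type ValidationErrors = { [key: string]: any } | null;

// Validator factory: configure the rule once, reuse the returned function
function emailDomainValidator(allowedDomain: string) {
  return (value: string): ValidationErrors => {
    if (value.endsWith('@' + allowedDomain)) {
      return null; // valid, no errors to report
    }
    return { emailDomain: { required: allowedDomain, actual: value } };
  };
}

const validate = emailDomainValidator('codewithdan.com');

console.log(validate('dan@codewithdan.com')); // null
console.log(validate('dan@example.com'));     // error map object
```

The returned error map is what the template binds against to decide which validation message to show.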

 

 

# Project 7: Angular JumpStart

 

| **Project URL:** | [https://github.com/DanWahlin/Angular-JumpStart](https://github.com/DanWahlin/Angular-JumpStart &quot;https://github.com/DanWahlin/Angular-JumpStart&quot;) |
| --- | --- |
| **Level:** | Beginner/Intermediate/Advanced |
| **Description:** | The Angular JumpStart project provides a complete application that demonstrates many key features provided by the Angular framework. Some of the project features include:  - TypeScript classes and modules - Modules are loaded with System.js - Defining routes including child routes and lazy loaded routes - Using Custom Components including custom input and output properties - Using Custom Directives - Using Custom Pipes - Defining Properties and Using Events in Components/Directives - Using the Http object for Ajax calls along with RxJS observables - Working with Utility and Service classes (such as for sorting and Ajax calls) - Using Angular databinding Syntax \[\], () and \[()\] - Using template-driven and reactive forms functionality for capturing and validating data - Optional: Webpack functionality is available for module loading and more (see the readme for details) - Optional: Ahead-Of-Time (AOT) support is available (see the readme for details) - Much more!  This is one of the key projects used in our [**Angular and TypeScript Application Development**](https://www.codewithdan.com/products/angular-programming) instructor-led training class. |

 

 

# Project 8: Angular, Node.js RESTful Services and MongoDB

 

| **Project URL:** | [https://github.com/DanWahlin/Angular-NodeJS-MongoDB-CustomersService](https://github.com/DanWahlin/Angular-NodeJS-MongoDB-CustomersService &quot;https://github.com/DanWahlin/Angular-NodeJS-MongoDB-CustomersService&quot;) |
| --- | --- |
| **Level:** | Beginner/Intermediate |
| **Description:** | This project shows how Angular can be used to integrate with a Node.js RESTful service that uses MongoDB as the backend database. The application relies on an Angular service that can perform CRUD (create, read, update and delete) operations and also demonstrates using both template-driven and reactive forms. Observables and RxJS play a key role in the async operations that the application performs.  The project can be run locally or using Docker containers. For more information on the concepts covered in this project, check out my [Integrating Angular with Node.js RESTful Services](https://www.pluralsight.com/courses/angular-nodejs-restful-services) course on Pluralsight.  |
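
For a rough idea of the CRUD surface such a service exposes, here is a framework-free, in-memory sketch. The real project performs these operations over HTTP against MongoDB; the names below are illustrative only:

```typescript
interface Customer {
  id: number;
  name: string;
}

// In-memory stand-in for the customers collection
class CustomersStore {
  private customers: Customer[] = [];
  private nextId = 1;

  create(name: string): Customer {
    const customer = { id: this.nextId++, name };
    this.customers.push(customer);
    return customer;
  }

  read(id: number): Customer | undefined {
    return this.customers.find(c => c.id === id);
  }

  update(id: number, name: string): boolean {
    const customer = this.read(id);
    if (customer === undefined) return false;
    customer.name = name;
    return true;
  }

  delete(id: number): boolean {
    const before = this.customers.length;
    this.customers = this.customers.filter(c => c.id !== id);
    return this.customers.length === before - 1;
  }
}

const store = new CustomersStore();
const ada = store.create('Ada');
store.update(ada.id, 'Ada Lovelace');
console.log(store.read(ada.id)?.name); // Ada Lovelace
console.log(store.delete(ada.id));     // true
```

In the project, each of these methods maps to a GET, POST, PUT, or DELETE request made through an Angular service that returns an Observable.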

 

 

# Project 9: Docker, Angular, Node.js RESTful Services and MongoDB

 

| **Project URL:** | [https://github.com/DanWahlin/Angular-RESTfulService](https://github.com/DanWahlin/Angular-RESTfulService &quot;https://github.com/DanWahlin/Angular-RESTfulService&quot;) |
| --- | --- |
| **Level:** | Beginner/Intermediate |
| **Description:** | This is another project that shows how Angular can be used to integrate with a Node.js RESTful service that uses MongoDB as the backend database. This particular project relies on Docker and Docker Compose to start up the Node.js and MongoDB containers used to run the application.  For more information on the Docker or Angular concepts covered in this project, check out my [Docker for Web Developers](https://www.pluralsight.com/courses/docker-web-development) or [Integrating Angular with Node.js RESTful Services](https://www.pluralsight.com/courses/angular-nodejs-restful-services) courses on Pluralsight.  |

 

 

# Project 10: Docker, Angular, ASP.NET Core RESTful Services and PostgreSQL

 

| **Project URL:** | [https://github.com/DanWahlin/AspNetCorePostgreSQLDockerApp](https://github.com/DanWahlin/AspNetCorePostgreSQLDockerApp &quot;https://github.com/DanWahlin/AspNetCorePostgreSQLDockerApp&quot;) |
| --- | --- |
| **Level:** | Beginner/Intermediate |
| **Description:** | This project shows how Angular can be used to integrate with an ASP.NET Core RESTful service that uses PostgreSQL as the backend database. The application is designed to be run using Docker containers and started up using Docker Compose.  For more information on the Docker or Angular concepts covered in this project, check out my [Docker for Web Developers](https://www.pluralsight.com/courses/docker-web-development) or [Integrating Angular with Node.js RESTful Services](https://www.pluralsight.com/courses/angular-nodejs-restful-services) courses on Pluralsight.  |</content:encoded></item><item><title>New Pluralsight Course - Integrating Angular with Node.js RESTful Services</title><link>https://blog.codewithdan.com/new-pluralsight-course-integrating-angular-with-node-js-restful-services/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-integrating-angular-with-node-js-restful-services/</guid><pubDate>Tue, 31 Jan 2017 00:00:00 GMT</pubDate><content:encoded>[![](/images/blog/new-pluralsight-course-integrating-angular-with-node-js-restful-services/pluralsight-integrating-angular-node.webp)](https://blog.codewithdan.com/wp-content/uploads/2017/01/pluralsight-integrating-angular-node.png)

I&apos;m excited to announce the release of a new course on [Pluralsight](https://www.pluralsight.com/courses/angular-nodejs-restful-services) titled **Integrating Angular with Node.js RESTful Services**! This covers Node.js 6.10 or higher and Angular 4 or higher.

In this course I&apos;ll walk you through the process of using Angular to call into RESTful services and perform CRUD (Create, Read, Update and Delete) operations in an application to allow a user to view and modify data. If you&apos;ve wondered how Angular services work, how to organize modules, the role of Observables and RxJS in async operations, how Angular&apos;s Http client can be used to make async calls, how to create and validate Angular forms, how to work with headers and page data, techniques for preventing CSRF attacks, or simply what&apos;s involved with creating a RESTful service, then this course will provide the information you need to get started. Node.js and Express are used along with MongoDB on the server side. However, the Angular concepts covered throughout the course can be used to call any RESTful service regardless of technology.

So what&apos;s in the course? Here&apos;s a synopsis of the key topics as well as the course modules. If you&apos;re a [Pluralsight](https://www.pluralsight.com/courses/angular-nodejs-restful-services) subscriber I really hope you enjoy the course. I had a lot of fun putting it together and filming it!

 

&lt;iframe class=&quot;video-player&quot; src=&quot;https://www.youtube.com/embed/fv0DkXrrpAI?rel=0&quot; width=&quot;300&quot; height=&quot;150&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

 

# Integrating Angular with Node.js RESTful Services

Learn how to build an Angular and Node.js application that can perform create, read, update and delete (CRUD) operations. Topics covered include building RESTful services with Node.js and Express, manipulating data in MongoDB and consuming services with Angular (note that Angular 2 or higher is covered and that TypeScript is used).

 

## **Key Angular Topics Covered:**

- TypeScript and how it can be used in Angular applications
- Code organization with Angular modules
- The role of ES2015 module loaders in Angular applications
- Promises versus Observables
- Learn how Observables work (a nice visual explanation is shown) and how to subscribe to them
- Learn how to create and use Angular services
- Angular&apos;s Http client and how it can be used to call into RESTful services
- Differences between Template-driven and Reactive forms in Angular
- Directives used in Template-driven forms and how to use them for two-way data binding
- Directives and code used in Reactive forms
- Form validation techniques and custom validators
- How to build custom components and leverage Input and Output properties
- Working with headers sent by the server
- Building a custom pagination component
- CSRF attacks and how Angular can help
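
On the Observables topic from the list above, the core idea fits in a few lines: subscribing runs a producer function that pushes values to an observer. The from-scratch sketch below shows only that core idea; RxJS layers operators, error handling, and unsubscription on top (all names here are illustrative):

```typescript
// The observer is the set of callbacks a subscriber provides
type NumberObserver = { next: (value: number) => void; complete: () => void };

class SimpleObservable {
  // Nothing runs at construction time; observables are lazy
  constructor(private producer: (observer: NumberObserver) => void) {}

  // Subscribing is what kicks off the producer
  subscribe(observer: NumberObserver): void {
    this.producer(observer);
  }
}

const numbers = new SimpleObservable(observer => {
  [10, 20, 30].forEach(n => observer.next(n));
  observer.complete();
});

const received: number[] = [];
numbers.subscribe({
  next: value => received.push(value),
  complete: () => console.log('received all values') // runs after 10, 20, 30
});
```

That laziness (nothing happens until subscribe) is the key difference from Promises, which start work as soon as they are created.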

 

## **Key Node.js Topics Covered:**

- Understand GET, POST, PUT and DELETE and the role each plays with RESTful services
- ES2015 features and how they can help organize code in Node.js applications
- Create RESTful services capable of supporting CRUD operations using Node.js and Express
- Use Mongoose to connect Express to MongoDB
- Load Express routes dynamically (convention or configuration technique)
- Paging data
- Working with headers
- Preventing CSRF attacks
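
For the paging topic above, the server-side math is simple: slice the requested page out of the result set and report the total count so the client can render pager controls (in the course, the count travels back to the client in a response header). A framework-free sketch with illustrative names:

```typescript
interface PagedResult {
  items: string[];
  totalCount: number; // in the course this is sent back in a response header
}

// page is 1-based: page 1 returns the first pageSize items
function getPage(allItems: string[], page: number, pageSize: number): PagedResult {
  const start = (page - 1) * pageSize;
  return {
    items: allItems.slice(start, start + pageSize),
    totalCount: allItems.length
  };
}

const customers = ['Ada', 'Grace', 'Alan', 'Edsger', 'Barbara'];
const page2 = getPage(customers, 2, 2);

console.log(page2.items);      // ['Alan', 'Edsger']
console.log(page2.totalCount); // 5
```

With MongoDB the same idea is expressed with skip and limit on the query rather than slicing an in-memory array, but the arithmetic is identical.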

 

## **Course Modules:**

1. **Course Introduction**
    - Pre-requisites to Maximize Learning
    - Learning Goals
    - Server-side Technologies and Concepts
    - Client-side Technologies and Concepts
    - Running the Application
    - Running the Application with Docker
2. **Exploring the Node.js and Angular Application**
    - Exploring the Project Structure
    - Application Modules
    - Configuring Node.js Routes
    - Configuring the ES Module Loader
    - Angular Modules, Components and Services
3. **Retrieving Data Using a GET Action**
    - Creating a GET Action to Return Multiple Customers
    - Creating a GET Action to Return a Single Customer
    - Making GET Requests with an Angular Service
    - Displaying Customers in a Grid
    - Displaying a Customer in a Form
    - Converting to a &apos;Reactive&apos; Form
4. **Inserting Data Using a POST Action**
    - Creating a POST Action to Insert a Customer
    - Making a POST Request with an Angular Service
    - Modifying the Customer Form to Support Inserts
    - Exploring the &apos;Reactive&apos; Form
5. **Updating Data Using a PUT Action**
    - Creating a PUT Action to Update a Customer
    - Making a PUT Request with an Angular Service
    - Modifying the Customer Form to Support Updates
    - Exploring the &apos;Reactive&apos; Form
6. **Deleting Data Using a DELETE Action**
    - Creating a DELETE Action to Delete a Customer
    - Making a DELETE Request with an Angular Service
    - Modifying the Customer Form to Support Deletes
    - Exploring the &apos;Reactive&apos; Form
7. **Data Paging, HTTP Headers and CSRF**
    - Adding a Paging Header to a RESTful Service Response
    - Accessing Headers and Data in an Angular Service
    - Adding Paging Support to a Component
    - Adding a Paging Component
    - CSRF Overview
    - Adding CSRF Functionality with csurf
    - Using a csurf Token in an Angular Service</content:encoded></item><item><title>.NET Rocks Interview:  Angular 2, ASP.NET Core and Docker</title><link>https://blog.codewithdan.com/net-rocks-interview-angular-2-asp-net-core-and-docker/</link><guid isPermaLink="true">https://blog.codewithdan.com/net-rocks-interview-angular-2-asp-net-core-and-docker/</guid><pubDate>Tue, 04 Oct 2016 00:00:00 GMT</pubDate><content:encoded>I always love jumping on the .NET Rocks podcast with Carl and Richard! Great guys who always make it a lot of fun. In my latest interview on the show we talk about ASP.NET Core, Angular 2, Docker and a bunch of other topics along the way. Click on the image below to listen.

[![dotnetrocks09-2016](/images/blog/net-rocks-interview-angular-2-asp-net-core-and-docker/dotnetrocks09-2016.webp)](http://www.dotnetrocks.com/default.aspx?ShowNum=1354)</content:encoded></item><item><title>Angular 2 Meetup in Barcelona with Dan Wahlin and John Papa</title><link>https://blog.codewithdan.com/angular-2-meetup-in-barcelona-with-dan-wahlin-and-john-papa/</link><guid isPermaLink="true">https://blog.codewithdan.com/angular-2-meetup-in-barcelona-with-dan-wahlin-and-john-papa/</guid><pubDate>Tue, 19 Jul 2016 00:00:00 GMT</pubDate><content:encoded>[![barcelonaMeetup](/images/blog/angular-2-meetup-in-barcelona-with-dan-wahlin-and-john-papa/barcelonaMeetup.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/07/barcelonaMeetup.jpg)I&apos;m excited to announce that John Papa and I will be presenting at an Angular 2 meetup on July 31st, 2016 at 19:00 in Barcelona! We&apos;re going to be there doing a full-day [Ultimate Angular 2 Workshop](https://blog.codewithdan.com/2016/05/14/angular-2-workshop-in-barcelona-july-31st-2016/) and wanted to organize a meetup for people who aren&apos;t able to attend.

### Event Details

- **What?** Angular 2 Meetup in Barcelona
- **Where?** [Can Jaumandreu - UB Parc de les Humanitats i les Ciències Socials, Carrer del Perú, 52, 08018 Barcelona](http://maps.google.com/maps?q=41.4048%2C2.19365+%28Can+Jaumandreu+-+UB+Parc+de+les+Humanitats+i+les+Ci%C3%A8ncies+Socials%2C+Carrer+del+Per%C3%BA%2C+52%2C+08018+Barcelona%29)
- **When?** July 31, 2016 (19:00 - 21:00)
- **Who?** Learn from [Dan Wahlin](http://twitter.com/danwahlin), [John Papa](http://twitter.com/john_papa) and others!
- **How do I Register?** [Register Here](https://ti.to/angularbeers/angularbeers-with-john-papa-and-dan-wahlin) (limited spots available)

If you&apos;re not able to make it to our full-day [Angular 2 workshop](https://blog.codewithdan.com/2016/05/14/angular-2-workshop-in-barcelona-july-31st-2016/) then we hope you&apos;ll be able to come to the meetup! We&apos;re really looking forward to visiting Barcelona and meeting everyone.</content:encoded></item><item><title>New Pluralsight Course - Play by Play: Docker for Web Developers</title><link>https://blog.codewithdan.com/new-pluralsight-course-play-by-play-docker-for-web-developers/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-play-by-play-docker-for-web-developers/</guid><pubDate>Thu, 14 Jul 2016 00:00:00 GMT</pubDate><content:encoded>[![dockerPlayByPlay2016](/images/blog/new-pluralsight-course-play-by-play-docker-for-web-developers/dockerPlayByPlay2016.webp)](https://app.pluralsight.com/library/courses/play-by-play-docker-web-developers-john-papa-dan-wahlin/table-of-contents)

I recently had the opportunity to sit down with my friend John Papa and talk about how Docker can be used to boost Web development productivity. In this [Play by Play course on Pluralsight](https://app.pluralsight.com/library/courses/play-by-play-docker-web-developers-john-papa-dan-wahlin/table-of-contents) we discuss what Docker is and how you can get started using it by installing [Docker Toolbox](https://www.docker.com/products/docker-toolbox) or [Docker for Mac/Windows](https://www.docker.com/products/docker#/). We discuss what images and containers are and the role they play, talk about the layered file system, as well as how you can quickly and easily get a full development environment up and running on your dev machine with Docker Compose. This same environment can easily be moved to staging and production environments as well.

[![docker-magazine-cover](/images/blog/new-pluralsight-course-play-by-play-docker-for-web-developers/docker-magazine-cover.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/04/docker-magazine-cover.jpg) If you haven&apos;t looked into Docker yet this might be the best 1.5 hours you can spend! Docker is a game changer for my development projects and is definitely one of my favorite technologies. [Check out the Play by Play video here](https://app.pluralsight.com/library/courses/play-by-play-docker-web-developers-john-papa-dan-wahlin/table-of-contents).

If you want additional information about Docker and how it can be used in your Web development workflow check out my full [Docker for Web Developers course](https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents).</content:encoded></item><item><title>Developer Bliss with Docker for Mac &amp; Docker for Windows</title><link>https://blog.codewithdan.com/developer-bliss-with-docker-for-mac-and-windows/</link><guid isPermaLink="true">https://blog.codewithdan.com/developer-bliss-with-docker-for-mac-and-windows/</guid><pubDate>Mon, 20 Jun 2016 00:00:00 GMT</pubDate><content:encoded>![docker-logo-240](/images/blog/developer-bliss-with-docker-for-mac-and-windows/docker-logo-240.webp)I&apos;m a huge fan of [Docker](https://www.docker.com/) and am using it a lot in various projects now. In fact the [https://blog.codewithdan.com](https://blog.codewithdan.com) site is running in Docker containers on [Azure](https://azure.microsoft.com). Three containers are used and managed with [Docker Cloud](https://cloud.docker.com/_/dashboard/onboarding):

1. nginx container
2. Wordpress container
3. MariaDB (MySql) container

What&apos;s great about Docker is that I can have a local version of my blog up and running on my dev machine in a matter of minutes and it mirrors production. That makes it easy to test out various Wordpress changes (plugin and theme updates for example) rather than trying them on my production server, which can be scary! If you&apos;re working in an enterprise environment this capability is especially useful with Line of Business apps that may require a lot of moving parts such as a reverse proxy, one or more running web servers, databases, caching servers, etc.

If you&apos;re new to Docker, getting started with it has always been pretty straightforward using [Docker Toolbox](https://www.docker.com/products/docker-toolbox), but with [Docker for Mac](https://docker.com/getdocker) and [Docker for Windows](https://docker.com/getdocker) getting started with Docker is even easier now! With Docker Toolbox you have to use Docker Machine to get VirtualBox up and running and &quot;linked&quot; into a command window on Mac or Windows. That&apos;s fairly easy to do actually but does require more upfront work (albeit minimal). With Docker for Mac and Docker for Windows it&apos;s even easier and also more efficient to run on your dev box.

### Key Benefits of Docker for Mac and Docker for Windows

Docker for Mac and Docker for Windows provide several key benefits especially to developers using Docker in their workflow:

- Faster and more efficient (more native and lightweight compared to VirtualBox)
- OS level integration that leverages native networking, virtualization and file system features
- Use the localhost network to access containers
- Auto-update as new releases come out
- Run Docker commands from any command-line window
- File change notifications (used with volumes) work consistently (which is great for devs using containers to do local development)

Both the Mac and Windows apps use OS-optimized virtualization services including xhyve on Mac and Hyper-V on Windows. You won&apos;t have to worry about using Docker Machine to setup the environment (as mentioned earlier) but can still use standard Docker Client commands as well as Docker Compose commands in any command window. You can also easily restart the VM and configure it (CPUs, memory, etc.) by clicking on the icon in the toolbar on Mac or tray on Windows:

### Docker for Mac

[![dockerMac](/images/blog/developer-bliss-with-docker-for-mac-and-windows/dockerMac.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/06/dockerMac.png)

 

### Docker for Windows

[![dockerWindows](/images/blog/developer-bliss-with-docker-for-mac-and-windows/dockerWindows.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/06/dockerWindows.png)

 

On Mac you can go to **Preferences** to set the number of CPUs and amount of memory you&apos;d like the underlying VM to use:

[![dockerMacPrefs](/images/blog/developer-bliss-with-docker-for-mac-and-windows/dockerMacPrefs.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/06/dockerMacPrefs.png)

 

Windows provides a **Settings** option:

[![dockerWindowsPrefs](/images/blog/developer-bliss-with-docker-for-mac-and-windows/dockerWindowsPrefs.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/06/dockerWindowsPrefs.png)

 

[Docker for Mac](https://docker.com/getdocker) and [Docker for Windows](https://docker.com/getdocker) make it even easier to get started using Docker on your development machine.  Since it&apos;s lightweight and fast I have it set up to start automatically when I log in and keep it running on my machine. What&apos;s really nice is that once it&apos;s installed on your dev box you can open a command window and use Docker directly without any additional configuration:

[![dockerExample](/images/blog/developer-bliss-with-docker-for-mac-and-windows/dockerExample-1024x427.webp)](https://blog.codewithdan.com/wp-content/uploads/2016/06/dockerExample.png)

If you haven&apos;t looked into Docker yet, there&apos;s no time like the present. Once you understand how it works you&apos;ll find that it&apos;s quite amazing and great to add into your development workflow (definitely my favorite technology over the past few years). If you&apos;d like to learn more about Docker from a developer standpoint check out my [Docker for Web Developers](https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) course on Pluralsight!

[![dockerForWebDevs](/images/blog/developer-bliss-with-docker-for-mac-and-windows/dockerForWebDevs.webp)](https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents)</content:encoded></item><item><title>Adventures in Angular Podcast Interview: TypeScript and Angular 2</title><link>https://blog.codewithdan.com/adventures-in-angular-podcast-interview-typescript-and-angular-2/</link><guid isPermaLink="true">https://blog.codewithdan.com/adventures-in-angular-podcast-interview-typescript-and-angular-2/</guid><pubDate>Thu, 09 Jun 2016 00:00:00 GMT</pubDate><content:encoded>![aialogo](/images/blog/adventures-in-angular-podcast-interview-typescript-and-angular-2/aialogo.webp)

 

I always enjoy talking with the [Adventures in Angular](https://devchat.tv/adv-in-angular/096-aia-angular-2-and-typescript-with-dan-wahlin) podcast hosts - great group of guys. In my latest interview on the show I talk about TypeScript and Angular 2 and our favorite features. Check out the podcast below.

[![aia](/images/blog/adventures-in-angular-podcast-interview-typescript-and-angular-2/aia.webp)](https://devchat.tv/adv-in-angular/096-aia-angular-2-and-typescript-with-dan-wahlin)</content:encoded></item><item><title>Video - TypeScript: Angular 2&apos;s Secret Weapon</title><link>https://blog.codewithdan.com/video-typescript-angular-2s-secret-weapon/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-typescript-angular-2s-secret-weapon/</guid><pubDate>Mon, 06 Jun 2016 00:00:00 GMT</pubDate><content:encoded>![typescript_ng-conf_talk](/images/blog/video-typescript-angular-2s-secret-weapon/typescript_ng-conf_2016_talk.webp)

I had the awesome opportunity to speak at [ng-conf](http://ng-conf.org) (one of my favorite conferences) about [TypeScript](http://typescriptlang.org) and some of the key features it provides that are used in [Angular 2](http://angular.io). While several of the features covered are available (and have been available for years) in server-side languages such as C#, Java and others, developers accustomed to writing browser-centric applications haven&apos;t been able to leverage these features in the past. TypeScript provides a great way to take advantage of ES6 features while also providing support for some of the features I discuss in the talk such as interfaces, generics, type support and more. Angular 2 is of course built using TypeScript and relies on these features behind the scenes.

You can view the talk below:

## TypeScript: Angular 2&apos;s Secret Weapon

&lt;iframe src=&quot;https://www.youtube.com/embed/e3djIqAGqZo?list=PLOETEcp3DkCq788xapkP_OU-78jhTf68j&quot; width=&quot;640&quot; height=&quot;480&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Yet Another Podcast Interview: TypeScript and Angular 2</title><link>https://blog.codewithdan.com/yet-another-podcast-interview-typescript-and-angular-2/</link><guid isPermaLink="true">https://blog.codewithdan.com/yet-another-podcast-interview-typescript-and-angular-2/</guid><pubDate>Sat, 21 May 2016 00:00:00 GMT</pubDate><content:encoded>I recently had the opportunity to chat with my good friend Jesse Liberty on his [Yet Another Podcast](http://jesseliberty.com/2016/05/21/yet-another-podcast-154-dan-wahlin-on-typescript-and-angular-2/) show about TypeScript and the role it can play in JavaScript-centric applications. We talked about the benefits that TypeScript offers (much more than just strong “types”) as well as how TypeScript fits into the overall Angular 2 picture. Jesse’s an experienced podcast host who asks great questions and makes it easy to chat about any subject – TypeScript in this case!

You can [listen to the interview here](http://jesseliberty.com/2016/05/21/yet-another-podcast-154-dan-wahlin-on-typescript-and-angular-2/).

[![barcelona-workshop2016](/images/blog/angular-2-workshop-in-barcelona-july-31st-2016/barcelona-workshop2016.webp)](https://ti.to/angularcamp/angularcamp-ng2workshop/discount/NG100)


- **What?** Learn Angular 2
- **Where?** University of Barcelona
- **When?** July 31, 2016 (9 am - 4 pm)
- **Who?** Learn from  [John Papa](http://twitter.com/john_papa) and [Dan Wahlin](http://twitter.com/danwahlin)
- **How do I Register?** [Register Here](https://ti.to/angularcamp/angularcamp-ng2workshop/discount/NG100)

There are a limited number of discounted tickets available for the workshop (the register link above already applies the discount).


## Free Meet Up After the Workshop

If you&apos;re not able to make it to the Ultimate Angular 2 workshop, John and I will also be giving talks at a free meet up event scheduled for the night of the workshop (July 31st, 2016). Register for this free event at [https://ti.to/angularbeers/angularbeers-with-john-papa-and-dan-wahlin](https://ti.to/angularbeers/angularbeers-with-john-papa-and-dan-wahlin).


## Video Interview with Dan Wahlin, John Papa and David Pich

&lt;iframe src=&quot;https://www.youtube.com/embed/1awD6ADnEWo&quot; width=&quot;560&quot; height=&quot;315&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;


## **The Ultimate Angular 2 Workshop**

![ng-conf-11-sharper](/images/blog/angular-2-workshop-in-barcelona-july-31st-2016/ng-conf-11-sharper-1024x302.webp)

Interested in spending a fun-filled day learning how to build **Single Page Applications** (SPAs) with **Angular 2** and [Google Developer Experts](https://developers.google.com/experts/all/technology/web-technologies) **John Papa** and **Dan Wahlin**? John and Dan will be in Barcelona for a full day of Angular 2 and look forward to sharing their knowledge, expertise and passion with everyone.

By learning Angular 2 and Single Page Applications you can provide a robust “desktop-like” experience for users while leveraging a web deployment model. Whether you’ve been building Angular 1 applications or you’re brand new to Angular, this workshop will provide a ground-up approach to learning Angular 2 concepts and by the end of the day you&apos;ll understand how all of the &quot;pieces&quot; fit together.

The workshop explores the core concepts that will help you build end-to-end SPA solutions including the role of ES6/TypeScript, project setup, code structure, using data binding and MV\*, abstracted remote data calls through services, observables, routing and more. You’ll see several demos (and be provided with the code) throughout the workshop that will help you learn and understand the Angular 2 framework.

This workshop covers:

- SPA fundamentals
- Why Angular 2?
- ES6/TypeScript fundamentals
- Transpiling/compiling TypeScript to JavaScript
- How to get started with the Angular CLI
- Bootstrapping your application
- The Role of Components
- Using Annotations in Components
- Dependency injection and providers
- Binding data to views
- Input and output properties
- Managing remote data calls using services
- Http and RxJS Observables
- Routing and page navigation

If you&apos;re looking to get a jump start on Angular 2 and want to learn how you can use it to build robust SPAs, then come join us for a day of learning and fun!

**This workshop will have some hands-on aspects so bring your own laptop.**

### **[Register Here](https://ti.to/angularcamp/angularcamp-ng2workshop/discount/NG100)**


**About Dan Wahlin**

Dan Wahlin is President and Chief Architect at Wahlin Consulting which provides consulting and training services on Web technologies such as JavaScript, Angular, Node.js, C#, ASP.NET MVC, Web API and Docker. He’s also published over 10 developer courses on Pluralsight.com and Udemy.com. Dan is a Google Developer Expert, Docker Captain, Microsoft MVP and Regional Director and speaks at conferences and user groups around the world.  Dan has written multiple books on Web technologies, hundreds of technical articles and blog posts ([http://blog.codewithdan.com](http://blog.codewithdan.com/)) and runs the Code with Dan newsletter (a great way to stay up on the latest technologies - sign up at his blog!). Follow Dan on Twitter [@DanWahlin](http://twitter.com/danwahlin).


**About John Papa**

John Papa is a Microsoft Regional Director, MVP, and Google Developer Expert for Angular. He is the author of 100+ articles and 10 books, and can often be found speaking around the world at keynotes and sessions for conferences such as NgConf, Build, TechEd, VSLive and AngleBrackets. John is a co-host of the popular Adventures in Angular podcast, author of the Angular Style Guide, and of many popular Pluralsight courses.


Special thanks to [Angular Beers](https://twitter.com/AngularBeers) for sponsoring us!

[![angular-beers](/images/blog/angular-2-workshop-in-barcelona-july-31st-2016/angular-beers.webp)](https://twitter.com/AngularBeers)</content:encoded></item><item><title>New Pluralsight Course: Docker for Web Developers</title><link>https://blog.codewithdan.com/new-pluralsight-course-docker-for-web-developers/</link><guid isPermaLink="true">https://blog.codewithdan.com/new-pluralsight-course-docker-for-web-developers/</guid><pubDate>Sun, 03 Apr 2016 00:00:00 GMT</pubDate><content:encoded>## How I Got Into Docker (and why you should too)

One of the most exciting technologies that I’ve researched and used over the past year is [Docker](http://www.docker.com). That’s a pretty bold statement, especially since I enjoy working with a lot of different technologies, so let me share a quick story about how I initially got started with Docker and my personal journey (if you’d prefer you can [jump directly to the Docker for Web Developers Course](https://www.pluralsight.com/courses/docker-web-development)).

You may have heard the term “Docker” tossed around at work or online and wondered what it was…I know I did. I heard a few people mention it on Twitter a few years ago making claims like, “Docker provides a consistent way to deploy applications.” That sounded appealing but wasn’t enough to make me jump into it. I’ve worked in a lot of dev environments over the course of my career and have certainly run into challenges moving software between development, staging and production environments, so the general promise of Docker certainly intrigued me. But, when it came down to it I only had so many hours in the day to research new technologies so I pushed it off. It kept coming up again and again though, so I realized I needed to make some time to look into it more.

Fast-forward a few more weeks and one night I decided to visit the [Docker website](http://www.docker.com) and do a little reading on the features it offered. The additional features I learned about continued to interest me but at the time I felt like Docker was aimed squarely at SysAdmins which wasn’t that exciting to me - no offense to my SysAdmin friends. :-) It was also 100% Linux-based when it first came out and I wasn’t a “Linux guy” back then. I was super comfortable with Mac and Windows but “Linux” just wasn’t my thing. I put off looking into it more but decided to sign up for the [Docker Newsletter](https://www.docker.com/newsletter-subscription) just to keep an eye on it and see if anything interesting showed up. I read a lot of great articles that listed some of the key features:

![Docker features](https://blog.codewithdan.com/wp-content/uploads/2017/02/dockerFeatures.png)

As I dug in deeper I learned that Docker allows “Images” published on [Docker Hub](https://hub.docker.com/) to be used to create instances of running “Containers”. An image is a super efficient way to get a given framework (Node.js, ASP.NET Core, PHP, etc.), database servers, caching servers and much more up and running on your dev machine or even in production. What’s so cool about the Docker technology is that “Images” aren’t the same thing as the “Virtual Machine Images” you may have used before – they’re typically smaller and more efficient to start and stop. A Docker Image isn’t a full OS as with Virtual Machines; it’s a layer that’s added on top of a host OS, which is where many of its benefits come from. I’ll save that discussion for another post though. In the meantime, here’s a quick summation of Docker Images and Containers for you:

![Docker Images and Containers](https://blog.codewithdan.com/wp-content/uploads/2017/02/dockerImagesContainers.png)

This still sounds/looks rather SysAdmin-ish though right? That’s what I thought initially too. Let’s get to the good stuff that can impact you and your development projects.

## How Docker Can Help Web Developers (and many others)

One day I saw an article in the [Docker Newsletter](https://www.docker.com/newsletter-subscription) that focused on a few key benefits that Docker offered to web developers. After reading it and thinking it over more I had one of those “light bulb moments” and thought, “Hey, this technology can be used to setup a consistent development environment quickly and easily! How cool is that?” What do I mean by that exactly?

Let’s say that you need to get Node.js, MongoDB, nginx, Redis and possibly more up and running on your dev machine. In addition to getting these requirements all installed properly, you also have to worry about getting security, configuration and more in place. To top it off, everyone on your team needs to do the same thing on their machines. Do you all have the correct versions of the servers and frameworks installed? What happens in a month or two when a new version comes out for a server/framework that everyone decides to move to? Is it easy to move to the new version? Did everyone configure security the same? How many hours have you (or your team) spent getting a development environment like that up and running correctly (and consistently) for a particular project? When a new developer or contractor comes onboard how quickly can they get to work? Whew – that’s way too many questions to ask and way too much work to do before you even write a single line of code (and yes…I realize there are many ways to automate some of this but rather than digressing I’ll stay focused on Docker here).

I’ve wasted more time than I care to admit in some cases setting up my dev environment for projects. The good news is that Docker can greatly simplify the process of creating a consistent development environment across multiple team members, remote workers, contractors and more. And – it can do it quickly! There are certainly other benefits as well. When I’m ready to move to staging or production I can move the exact environment (via Docker Images) and feel confident that the application is going to run the same in the other environments.

![Docker features for web developers](https://blog.codewithdan.com/wp-content/uploads/2017/02/dockerDeveloperFeatures.png)

Once I realized that Docker wasn’t just for SysAdmins (which is certainly an important role as well) and could play a huge role for developers I jumped in head first and have never looked back. Sure, there was a learning curve (after all – I did have to learn some Linux fundamentals) but I’ve quite honestly enjoyed the process every step of the way, appreciate the [documentation](https://docs.docker.com/) that the Docker website offers and enjoy working with the [tools](https://www.docker.com/products/docker-toolbox) provided by Docker such as **Docker Machine**, **Docker Client** and especially **Docker Compose**. I’ll be writing some posts in the near future about these tools and how they can be used so stay tuned. Follow me on Twitter at [@DanWahlin](http://twitter.com/danwahlin) if you’re interested in hearing about the latest posts.

And that brings us to the new course that I just released…

## The New Pluralsight Course: Docker for Web Developers

I’ve been using Docker for quite a while now and am still super excited about the benefits it offers software developers. In fact, I was so excited about the features that I decided to create a full video course on [Pluralsight.com](https://www.pluralsight.com/courses/docker-web-development) which was recently released. The course has over 5 hours of in-depth information about why and how you’d use Docker in your Web development environment. Here’s a small sampling of some of the topics covered:

- Why use Docker as a Developer?
- The difference between Docker Images and Virtual Machines
- Installing Docker Toolbox on Mac and Windows
- The role of Docker Hub for pulling images
- Getting started quickly and easily with the Kitematic Tool
- Using Docker Machine to work with Linux on Mac and Windows
- Key Docker Machine and Docker Client commands
- How do you hook your source code into Docker?
- How do you build custom Docker images?
- How do multiple containers talk to each other at runtime?
- Bring up a complete development environment with Docker Compose
- Push custom images to Docker Hub
- Using Docker Cloud to deploy your images/containers to a cloud provider
- Much more!

Here’s the official course table of contents…

## Docker for Web Developers Video Course Outline

![Docker for Web Developers course](https://blog.codewithdan.com/wp-content/uploads/2017/02/dockerCourseBanner.png)

Docker&apos;s open app-building platform can give any web developer a boost in productivity. You&apos;ll learn how to use the Docker toolbox, how to work with images and containers, how to get your project running in the cloud, and more. [View the course here.](https://www.pluralsight.com/courses/docker-web-development)

1. Course Overview
2. Why Use Docker as a Developer?
3. Setting Up Your Dev Environment with Docker Toolbox
4. Using Docker Machine and Docker Client
5. Hooking Your Source Code into a Container
6. Building Custom Images with Dockerfile
7. Communicating Between Docker Containers
8. Managing Containers with Docker Compose
9. Running Your Containers in the Cloud
10. Reviewing the Case for Docker

I put a lot of blood, sweat, tears, late nights, head banging, head scratching, why won’t this s\*@? work moments, awesome – it works! moments, joy, excitement and more into the course so I really hope you enjoy it! [Get to the full course here!](https://www.pluralsight.com/courses/docker-web-development)</content:encoded></item><item><title>Video: Modern Web Development Interview with Channel 9</title><link>https://blog.codewithdan.com/video-modern-web-development-interview-with-channel-9/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-modern-web-development-interview-with-channel-9/</guid><pubDate>Thu, 14 Jan 2016 00:00:00 GMT</pubDate><content:encoded>I had the privilege to sit down with [Seth Juarez](http://twitter.com/sethjuarez) from [Channel 9](https://channel9.msdn.com/Events/Seth-on-the-Road) at the [AngleBrackets](http://anglebrackets.org) conference in Las Vegas (fall 2015) and talk about modern web development and the main technologies that drive it. We talked about a lot of different topics ranging from TypeScript, Angular and Aurelia on the client-side to Node.js and ASP.NET 5 on the server-side. Seth’s a great interview host (and a super cool guy to hang out with) and I really enjoyed talking with him about modern technology in today’s world. Check out the interview below.

# Modern Web Development with Seth Juarez and Dan Wahlin

&lt;iframe src=&quot;https://channel9.msdn.com/Events/Seth-on-the-Road/DevIntersection-2015/Modern-Web-Development-with-Dan-Wahlin/player?format=html5&quot; width=&quot;800&quot; height=&quot;450&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Getting Started with ES6 - Using Classes</title><link>https://blog.codewithdan.com/getting-started-with-es6-using-classes/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-with-es6-using-classes/</guid><pubDate>Wed, 05 Aug 2015 00:00:00 GMT</pubDate><content:encoded>In a [previous post](http://weblogs.asp.net/dwahlin/getting-started-with-es6-–-transpiling-es6-to-es5) I introduced how ES6 can be transpiled to ES5 using Traceur or Babel. By using transpilers you can write“modern” code and leverage features found in ES6 today while still allowing the code to run in older browsers. In this post I’m going to dive into classes which is one of the shiny new features found in ES6.

# Getting Started with ES6/ES2015 Classes

Classes are the subject of much debate in the JavaScript world with some people loving them and others hating them. I’ve never believed in one world view being right for every situation and think that classes definitely have their place. If nothing else they’re worth exploring so that you understand the different options provided by ES6. If you want to continue with the more traditional “functional programming” techniques available in JavaScript it’s important to note that you can still do that. Classes are an option provided by ES6 but certainly not required to write ES6 code.

Here’s an example of an ES6 class that defines an automobile:

```
class Auto {
    constructor(data) {
        this.make = data.make;
        this.model = data.model;
        this.year = data.year;
        this.price = data.price;
    }

    getTotal(taxRate) {
        return this.price + (this.price * taxRate);
    }

    getDetails() {
        return `${this.year} ${this.make} ${this.model}`;
    }
}
```

If you’ve worked with languages such as Java, C# or others that support classes the general syntax will probably look familiar. You can see that the **class** keyword is used to define a container named **Auto**. The class contains a **constructor** that is called when the class is created using the **new** keyword (more on “new” later). The constructor provides a place where you can initialize properties with values. In the **Auto** class example above a **data** object is passed into the constructor and its properties are associated with the **make**, **model**, **year** and **price** properties.

The **Auto** class also defines two functions named **getTotal()** and **getDetails()**. You’ll quickly notice that the **function** keyword isn’t used to define each function. That’s a new feature available in classes (and in other parts of ES6 such as functions in object literals) that provides a nice, compact way to define functions. Within the **getDetails()** function you’ll also notice another new feature in ES6 – template strings. I’ll provide additional details about this feature in a future post.
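
The shorthand syntax works in plain object literals too. Here’s a quick sketch (the `auto` literal is my own example, not part of the class above):

```
const auto = {
    make: 'Chevy',
    year: 2014,
    // shorthand method syntax - no function keyword needed
    describe() {
        // a template string interpolates values with ${ }
        return `${this.year} ${this.make}`;
    }
};

console.log(auto.describe()); // 2014 Chevy
```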

The class code shown above is referred to as a “class definition”. Classes can also be defined using a “class expression” syntax. An example of a class expression is shown next:

```
let Automobile = class {
    constructor(data) {
        this.make = data.make;
        this.model = data.model;
        this.year = data.year;
        this.price = data.price;
    }

    getDetails() {
        return `${this.year} ${this.make} ${this.model}`;
    }
}
```

The **Automobile** variable in this example is assigned to a class expression that has a constructor and a **getDetails()** function. The **let** keyword used in this example creates a local scoped variable. If you put this code inside a block such as an **if** statement or **for** loop the variable’s scope will be limited to the block. That’s another new (and very welcome) feature provided by ES6.
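
To see that block scoping in action, here’s a small sketch (my own illustration):

```
let result = '';

if (true) {
    // Automobile only exists inside this block
    let Automobile = class {
        getDetails() {
            return 'block-scoped class expression';
        }
    };
    result = new Automobile().getDetails();
}

console.log(result); // block-scoped class expression
// Referencing Automobile down here would fail since the
// let declaration was limited to the if block.
```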

ES6 classes can also contain property get and set blocks if desired. This is done by using the **get** and **set** keywords:

```
class AutoWithProperties {
    constructor(data) {
        // assign through the properties so the set blocks below run
        this.make = data.make;
        this.model = data.model;
    }

    get make() {
        return this._make;
    }
    
    set make(val) {
        if (val) {
            this._make = val;
        } else {
            this._make = &apos;No Make&apos;;
        }
    }
    
    get model() {
        return this._model;
    }
    
    set model(val) {
        if (val) {
            this._model = val;
        } else {
            this._model = &apos;No Model&apos;;
        }
        
    }
}
```

The **AutoWithProperties** class defines two properties named **make** and **model** that read (the get block) and write (the set block) to and from backing properties named \_**make** and \_**model**.
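
Here’s a condensed sketch showing a set block guarding its backing property (the `GuardedAuto` name and the TypeScript-style annotations are my own; the fallback logic mirrors the class above):

```
class GuardedAuto {
    private _make = '';

    constructor(make: string = '') {
        // assigning through the property runs the set block below
        this.make = make;
    }

    get make(): string {
        return this._make;
    }

    set make(val: string) {
        // empty or missing values fall back to a default
        this._make = val ? val : 'No Make';
    }
}

console.log(new GuardedAuto('Chevy').make); // Chevy
console.log(new GuardedAuto().make);        // No Make
```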

# Creating an Instance of a Class

Once a class is defined you can create an instance using the **new** keyword. For example, an instance of **Auto** can be created using the following code:

```
let auto = new Auto({
   make: &apos;Chevy&apos;,
   model: &apos;Malibu&apos;,
   price: 30000, 
   year: 2014
});
```

What happens if you try to call the **Auto** class as a function without using the **new** keyword? That’s one of the limitations of classes in ES6 and something the anti-class crowd will quickly point out as a flaw. Calling a class as a regular function results in the following error:

![Error when calling an ES6 class without the new keyword](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/bbaa0c1a4e61_8A2E/classError_2.png)

This limitation is problematic when using a “functional programming” approach and definitely something to consider before choosing to use classes.

Classes also aren’t hoisted like regular JavaScript functions. If you try to “new up” a class before it’s defined, the class definition won’t be hoisted to the top of the code the way a function declaration would be. Instead, you’ll get an error at runtime. That’s another limitation of classes that you should be aware of.

# Extending a Class

Extending a function in ES5 requires use of the prototype property. ES6 classes still rely on prototype under the covers, but the syntax is much easier to read and understand. To extend a class you can use the **extends** keyword as shown next:

```
class Car extends Auto {
    constructor(data) {
        super(data);
        this.isElectric = data.isElectric;
        this.isHatchback = data.isHatchback;
    }

    getDetails() {
        return `${super.getDetails()} Electric: ${this.isElectric} Hatchback: ${this.isHatchback}`;
    }
}
```

This example defines a new **Car** class that extends the **Auto** class. The class’s constructor forwards the **data** parameter to the base class by making a call to **super()**.

It’s important to note that a call to **super()** must be made before properties are initialized using “this” (if you return Object.create(null) from the constructor then you don’t actually have to call super() though).  Anytime you extend a class **super()** must be called even if no data is passed to the base class’s constructor. The only exception to this rule is if the base and derived classes do not explicitly define custom constructors. In that case constructors will be implicitly created for both of the classes.
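
Here’s a minimal sketch of that exception: neither class declares a constructor, so implicit ones (including the super() call) are generated for you (the class names are my own):

```
class Base {
    greet() {
        return 'hello';
    }
}

// Neither class declares a constructor, so an implicit one is
// generated for Sub that automatically calls super()
class Sub extends Base {
    shout() {
        return this.greet().toUpperCase();
    }
}

console.log(new Sub().shout()); // HELLO
```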

Here&apos;s a fiddle with the Auto and Car code in it that you can play around with:

&lt;iframe width=&quot;100%&quot; height=&quot;300&quot; src=&quot;//jsfiddle.net/dwahlin/o93Lm0rc/embedded/&quot; allowfullscreen=&quot;allowfullscreen&quot; frameborder=&quot;0&quot;&gt;&lt;/iframe&gt;

# Summary

Classes provide a succinct and clean way to encapsulate code that makes JavaScript feel more “object-oriented” even though prototypes are still used behind the scenes. They also provide constructors, support properties, allow functions to be defined without using the **function** keyword and support extension using the **extends** keyword. Developers coming from languages that support classes will feel right at home using the new class feature in ES6.

So should you move all of your code to classes as you migrate to ES6? Developers that prefer “functional programming” will be quick to say NO and avoid them like the plague. Classes don’t fit in well with the functional programming approach. If you understand their limitations (having to use new, no hoisting, etc.) then I think they can work well in some applications although it depends on the type of JavaScript code you like to write. I always say, “Use the right tool for the right job” which means taking the time to learn the ins-and-outs of classes and deciding if their pros outweigh their cons for your target application.

You can play around with classes by using the fiddle above, by using my ES6 starter project available at [https://github.com/DanWahlin/ES6-Modules-Starter](https://github.com/DanWahlin/ES6-Modules-Starter &quot;https://github.com/DanWahlin/ES6-Modules-Starter&quot;) or by visiting the [Babel ES6 playground](https://babeljs.io/repl/#?experimental=true&amp;evaluate=true&amp;loose=false&amp;spec=false&amp;code=class%20Auto%20%7B%0A%20%20%0A%7D).</content:encoded></item><item><title>Video: Building a Single-Page App with Angular, TypeScript, Azure Active Directory and Office 365 APIs</title><link>https://blog.codewithdan.com/video-building-a-single-page-app-with-angular-typescript-azure-active-directory-and-office-365-apis/</link><guid isPermaLink="true">https://blog.codewithdan.com/video-building-a-single-page-app-with-angular-typescript-azure-active-directory-and-office-365-apis/</guid><pubDate>Sat, 02 May 2015 00:00:00 GMT</pubDate><content:encoded>I had the opportunity to speak at the BUILD 2015 conference in San Francisco with my friend [Andrew Connell](http://www.andrewconnell.com/) and had a great time! We gave a talk on Angular, TypeScript, Azure AD and Office 365 APIs and I also gave two “Express” talks on TypeScript. It was super fun to meet new people and hang out with everyone at the conference. I even had the opportunity to check out the upcoming [HoloLens](http://www.microsoft.com/microsoft-hololens/en-us) device which was super cool! I was heading up an elevator to a meeting and a member of the HoloLens team approached me and asked if I had some time to do some “HoloLens Testing”. Let’s just say that I was a bit late for my meeting – I wasn’t going to pass up the opportunity to check out HoloLens in person.

Thanks to everyone who came to the talk. We had a great turnout! If you weren’t able to make it (it was tough to get a ticket to BUILD this year) then you can catch the video below.


We had some microphone challenges for the first minute or so of the talk so it looks like they cut out the introduction but everything else is there if you’re interested in watching the talk. I have no idea why they chose the picture below for the video overlay. I’m apparently saluting Angular and TypeScript (with the wrong hand).

## Building a Single-Page App with Angular, TypeScript, Azure AD and Office 365 APIs

&lt;iframe src=&quot;//channel9.msdn.com/Events/Build/2015/3-689/player?format=html5&quot; width=&quot;830&quot; height=&quot;467&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>The Role of Interfaces in TypeScript</title><link>https://blog.codewithdan.com/the-role-of-interfaces-in-typescript/</link><guid isPermaLink="true">https://blog.codewithdan.com/the-role-of-interfaces-in-typescript/</guid><pubDate>Mon, 13 Apr 2015 00:00:00 GMT</pubDate><content:encoded>In my [last post](http://weblogs.asp.net/dwahlin/archive/2013/01/07/extending-classes-and-interfaces-using-typescript.aspx) I talked about how classes and interfaces could be extended in the TypeScript language. By using TypeScript’s **extends** keyword you can easily create derived classes that inherit functionality from a base class. You can also use the extends keyword to extend existing interfaces and create new ones. In the previous post I showed an example of an ITruckOptions interface that extends IAutoOptions. An example of the interfaces is shown next:

```
interface IAutoOptions {
    engine: IEngine;
    basePrice: number;
    state: string;
    make: string;
    model: string;
    year: number;
}

interface ITruckOptions extends IAutoOptions {
    bedLength: string;
    fourByFour: boolean;
}
```

I also showed how a class named Engine can implement an interface named IEngine. By having the IEngine interface in an application you can enforce consistency across multiple engine classes.


```
interface IEngine {
    start(callback: (startStatus: boolean, engineType: string) =&gt; void) : void;
    stop(callback: (stopStatus: boolean, engineType: string) =&gt; void) : void;
}

class Engine implements IEngine {
    constructor(public horsePower: number, public engineType: string) { }

    start(callback: (startStatus: boolean, engineType: string) =&gt; void) : void {
        window.setTimeout(() =&gt; {
            callback(true, this.engineType);
        }, 1000);
    }

    stop(callback: (stopStatus: boolean, engineType: string) =&gt; void) : void {
        window.setTimeout(() =&gt; {
            callback(true, this.engineType);
        }, 1000);
    }
}
```

Although interfaces work well in object-oriented languages, JavaScript doesn’t provide any built-in support for interfaces, so what role do they actually play in a TypeScript application? The first answer to that question was discussed earlier and relates to consistency. Classes that implement an interface must implement all of its required members (note that TypeScript interfaces support optional members as well). This makes it easy to enforce consistency across multiple TypeScript classes. If a class doesn’t implement an interface properly, the TypeScript compiler will report an error. This lets you catch issues upfront rather than after the fact, which is definitely beneficial and simplifies maintenance down the road.
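
Optional members are marked with a `?` after the member name, and callers can simply omit them. A quick sketch (the interface and function here are my own illustration):

```
interface AutoSummaryOptions {
    make: string;
    model: string;
    color?: string; // optional: callers may omit this member
}

function summarize(options: AutoSummaryOptions): string {
    // color may be undefined, so provide a fallback value
    return `${options.make} ${options.model} (${options.color || 'unpainted'})`;
}

console.log(summarize({ make: 'Ford', model: 'F-150' })); // Ford F-150 (unpainted)
```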

If you look at the JavaScript code that’s generated you won’t see interfaces used at all though – JavaScript simply doesn’t support them. Here’s an example of the JavaScript code generated by the TypeScript compiler for Engine:


```
var Engine = (function () {
    function Engine(horsePower, engineType) {
        this.horsePower = horsePower;
        this.engineType = engineType;
    }
    Engine.prototype.start = function (callback) {
        var _this = this;
        window.setTimeout(function () {
            callback(true, _this.engineType);
        }, 1000);
    };
    Engine.prototype.stop = function (callback) {
        var _this = this;
        window.setTimeout(function () {
            callback(true, _this.engineType);
        }, 1000);
    };
    return Engine;
})();
```


Looking through the code you’ll see that there’s no reference to the IEngine interface at all which is an important point to understand with TypeScript – interfaces are only used when you’re writing code (the editor can show you errors) and when you compile. They’re not used at all in the generated JavaScript.

In addition to driving consistency across TypeScript classes, interfaces can also be used to ensure proper values are being passed into properties, constructors, or functions. Have you ever passed an object literal (for example { firstName: 'John', lastName: 'Doe' }) into a JavaScript function or object constructor only to realize later that you accidentally left out a property that should’ve been included? That’s an easy mistake to make in JavaScript since there’s no indication that you forgot something. With TypeScript that type of problem is easy to solve by adding an interface into the mix.

The Auto class shown next demonstrates how an interface named IAutoOptions can be defined on a constructor parameter. If you pass an object into the constructor that doesn’t satisfy the IAutoOptions interface then you’ll see an error in editors such as Visual Studio and the code won’t compile using the TypeScript compiler.

 

```
class Auto {
    basePrice: number;
    engine: IEngine;
    state: string;
    make: string;
    model: string;
    year: number;

    constructor(options: IAutoOptions) {
        this.engine = options.engine;
        this.basePrice = options.basePrice;
        this.state = options.state;
        this.make = options.make;
        this.model = options.model;
        this.year = options.year;
    }
}
```

An example of using the Auto class’s constructor is shown next. In this example the **year** (a required field in the interface) is missing so the object doesn’t satisfy the IAutoOptions interface.

```
var auto = new Auto({
    engine: new Engine(250, &apos;V8&apos;),
    basePrice: 45000,
    state: &apos;Arizona&apos;,
    make: &apos;Ford&apos;,
    model: &apos;F-150&apos;
});
```

In this example the object being passed into the Auto’s constructor implements 5 out of 6 fields from the IAutoOptions interface. Because the constructor parameter requires 6 fields an error will be displayed in the editor and the TypeScript compiler will error out as well if you try to compile the code to JavaScript. An example of the error displayed in Visual Studio is shown next:

 

[Visual Studio error message](https://aspblogs.blob.core.windows.net/media/dwahlin/Media/image_1AAD90C6.png)

This makes it much easier to catch issues such as missing data while you’re writing the initial code, as opposed to catching them after the fact while trying to run the actual JavaScript. If you write unit tests, this functionality should also help ensure that tests use proper data.

Interfaces also allow for more loosely coupled applications. Looking back at the Auto class code, you’ll notice that the engine field is of type IEngine. This allows any object that implements the interface to be passed, which provides additional flexibility. The same can be said about the Auto constructor parameter, since any object that implements IAutoOptions can be passed.

 

## Conclusion

In this post you’ve seen that interfaces provide a great way to enforce consistency across objects, which is useful in a variety of scenarios. In addition to consistency, interfaces can be used to ensure that proper data is passed to properties, constructors, and functions. Finally, interfaces provide additional flexibility in an application and make it more loosely coupled. Although they never appear in the generated JavaScript, they help identify issues upfront as you’re building an application and certainly help as modifications are made in the future.

If you’d like to learn more about TypeScript check out the [TypeScript Fundamentals course on Pluralsight.com](http://ow.ly/gEhsJ).</content:encoded></item><item><title>Extending Classes and Interfaces using TypeScript</title><link>https://blog.codewithdan.com/extending-classes-and-interfaces-using-typescript/</link><guid isPermaLink="true">https://blog.codewithdan.com/extending-classes-and-interfaces-using-typescript/</guid><pubDate>Fri, 10 Apr 2015 00:00:00 GMT</pubDate><content:encoded>![image](/images/blog/extending-classes-and-interfaces-using-typescript/image_thumb_17649611.webp)In a [previous post](http://weblogs.asp.net/dwahlin/archive/2012/10/31/getting-started-with-typescript-classes-static-types-and-interfaces.aspx) I discussed the fundamentals of the TypeScript language and how it can be used to build JavaScript applications. TypeScript is all about strongly-typed variables and function parameters, encapsulation of code, and catching issues upfront as opposed to after the fact to provide more maintainable code bases. One of the great features it offers is the ability to take advantage of inheritance without having to be an expert in JavaScript prototypes, constructors, and other language features (although I certainly recommend learning about those features regardless if you use TypeScript or not).

In this post I’ll discuss how classes and interfaces can be extended using TypeScript and the resulting JavaScript that’s generated. Let’s jump in!  
  
  

# Extending Classes and Interfaces  

Let’s assume that we have a TypeScript class named **Auto** that has the following code in it:  
  

```
class Auto {
    private _basePrice: number;
    engine: IEngine;
    state: string;
    make: string;
    model: string;
    year: number;
    accessoryList: string;

    constructor(options: IAutoOptions) {
        this.engine = options.engine;
        this.basePrice = options.basePrice;
        this.state = options.state;
        this.make = options.make;
        this.model = options.model;
        this.year = options.year;
    }

    calculateTotal() : number {
        var taxRate = TaxRateInfo.getTaxRate(this.state);
        return this.basePrice + (taxRate.rate * this.basePrice);
    }

    addAccessories(...accessories: Accessory[]) {
        this.accessoryList = &apos;&apos;;
        for (var i = 0; i &lt; accessories.length; i++) {
            var ac = accessories[i];
            this.accessoryList += ac.accessoryNumber + &apos; &apos; + ac.title + &apos;&lt;br /&gt;&apos;;
        }
    }

    getAccessoryList(): string {
        return this.accessoryList;
    }

    get basePrice(): number {
        return this._basePrice;
    }

    set basePrice(value: number) {
        if (value &lt;= 0) throw &apos;price must be &gt;= 0&apos;;
        this._basePrice = value;
    }
} 
```

  
  
Looking through the code you can see that the class has several members including fields, a constructor, functions (including a function that accepts a special type of parameter, declared with ..., referred to as a rest parameter), and the get and set blocks for a property named basePrice. Although unrelated to inheritance, it’s important to note that properties in TypeScript only work when the TypeScript compilation target is set to ECMAScript 5 using the --target switch (for example: tsc.exe --target ES5 YourFile.ts).

The engine field in the Auto class accepts any type that implements a TypeScript interface named IEngine and the constructor accepts any object that implements an IAutoOptions interface. Both of these interfaces are shown next:  
  
  

```
interface IEngine {
    start(callback: (startStatus: boolean, engineType: string) =&gt; void) : void;
    stop(callback: (stopStatus: boolean, engineType: string) =&gt; void) : void;
}

interface IAutoOptions {
    engine: IEngine;
    basePrice: number;
    state: string;
    make: string;
    model: string;
    year: number;
}
```

  
The start() and stop() functions in the IEngine interface both accept a callback function. The callback function must accept two parameters of type boolean and string. An example of implementing the IEngine interface using TypeScript is shown next. A class that implements an interface must define all members of the interface unless the members are marked as optional using the ? operator.  
  
  

```
class Engine implements IEngine {
    constructor(public horsePower: number, public engineType: string) { }

    start(callback: (startStatus: boolean, engineType: string) =&gt; void) : void{
        window.setTimeout(() =&gt; {
            callback(true, this.engineType);
        }, 1000);
    }

    stop(callback: (stopStatus: boolean, engineType: string) =&gt; void) : void{
        window.setTimeout(() =&gt; {
            callback(true, this.engineType);
        }, 1000);
    }
}
```

  
It goes without saying that if we wanted to create a Truck class that extends the Auto class we wouldn’t want to cut-and-paste the code from Auto into Truck since that would lead to a maintenance headache down the road. Fortunately, TypeScript allows us to take advantage of inheritance to re-use the code in Auto. An example of a Truck class that extends the Auto class using the TypeScript **extends** keyword is shown next:  
  
  

```
class Truck extends Auto {
    private _bedLength: string;
    fourByFour: boolean;

    constructor(options: ITruckOptions) {
        super(options);
        this.bedLength = options.bedLength;
        this.fourByFour = options.fourByFour;
    }

    get bedLength(): string {
        return this._bedLength;
    }

    set bedLength(value: string) {
        if (value == null || value == undefined || value == &apos;&apos;) {
            this._bedLength = &apos;Short&apos;;
        }
        else {
            this._bedLength = value;
        }
    }
}
```

  

The Truck class extends Auto by adding bedLength and fourByFour capabilities. The constructor also accepts an object that implements the ITruckOptions interface which in turn extends the IAutoOptions interface shown earlier. Notice that interfaces can also be extended in TypeScript by using the **extends** keyword:  

```
interface ITruckOptions extends IAutoOptions {
    bedLength: string;
    fourByFour: boolean;
}
```

  
Here’s an example of creating a new instance of the Truck class and passing an object that implements the ITruckOptions interface into its constructor:  
  
  

```
var truck = new Truck({
    engine: new Engine(250, &apos;V8&apos;),
    basePrice: 45000,
    state: &apos;Arizona&apos;,
    make: &apos;Ford&apos;,
    model: &apos;F-150&apos;,
    year: 2013,
    bedLength: &apos;Short Bed&apos;,
    fourByFour: true
});
```

# Inheritance in JavaScript  

  
You can see that the TypeScript **extends** keyword provides a simple and convenient way to inherit functionality from a base class (or extend an interface) but what happens behind the scenes once the code is compiled into JavaScript? After all, JavaScript doesn’t have an **extends** or **inherits** keyword in the language - at least not in ECMAScript 5 or earlier. If you look at the JavaScript code that’s output by the TypeScript compiler you’ll see that a little magic is added to simulate inheritance in JavaScript using prototyping.

First, a variable named \_\_extends is added into the generated JavaScript and it is assigned to a function that accepts two parameters as shown next:  
  
  

```
var __extends = this.__extends || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
};
```

  
  
The function accepts the derived/child type (the d parameter) and the base type (the b parameter). Inside the function a constructor function named \_\_ is created (definitely a strange name) and the derived type is assigned to the constructor property of its instances. From there, the base type’s prototype is assigned to the \_\_ function’s prototype. To finish things up, a new instance of \_\_ is created and assigned to the derived type’s prototype so it picks up prototype members from the base type. In the end, this little function provides a reusable way to handle inheritance between two objects in JavaScript. If you’re new to prototypes then you’re probably appreciating the simplicity provided by the TypeScript **extends** keyword!

The \_\_extends function is used later in the generated JavaScript code to handle inheritance between Truck and Auto. An example of the code that’s generated to represent the Truck class is shown next:  
  
  

```
var Truck = (function (_super) {
    __extends(Truck, _super);
    function Truck(options) {
        _super.call(this, options);
        this.bedLength = options.bedLength;
        this.fourByFour = options.fourByFour;
    }
    Object.defineProperty(Truck.prototype, &quot;bedLength&quot;, {
        get: function () {
            return this._bedLength;
        },
        set: function (value) {
            if(value == null || value == undefined || value == &apos;&apos;) {
                this._bedLength = &apos;Short&apos;;
            } else {
                this._bedLength = value;
            }
        },
        enumerable: true,
        configurable: true
    });
    return Truck;
})(Auto);
```

  
Notice that the Truck variable is assigned to a function that accepts a parameter named \_super. This parameter represents the base class to inherit functionality from. The function assigned to Truck is self-invoked at the bottom of the code and the base class to derive from (Auto in this example) is passed in for the value of the \_super parameter. The \_\_extends function discussed earlier is then called inside of the Truck function and the derived type (Truck) and base type (Auto) are passed in as parameters. The magic of inheritance then happens using prototypes as discussed earlier.  
  
  

# Conclusion  

In this post you’ve seen how TypeScript can be used to create an inheritance hierarchy and the resulting JavaScript that’s generated. You’ve also seen how interfaces can be created, implemented, and even extended using TypeScript.

Check out the [TypeScript Fundamentals course](http://pluralsight.com/training/Courses/TableOfContents/typescript) on [Pluralsight.com](http://www.pluralsight.com)  and see what TypeScript offers for both large-scale and small-scale JavaScript applications.</content:encoded></item><item><title>Introducing the AngularU Conference</title><link>https://blog.codewithdan.com/introducing-the-angularu-conference/</link><guid isPermaLink="true">https://blog.codewithdan.com/introducing-the-angularu-conference/</guid><pubDate>Sun, 29 Mar 2015 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/introducing-the-angularu-conference/angularsticker_300x300.png)In late 2014 my good friend [Peter Kellner](https://twitter.com/pkellner) approach me with a big idea – an idea that immediately caught my attention. He wanted to explore collaborating on a new conference idea that would highlight [Angular](http://angular.io) and other “hot” Web development topics and wondered if I’d be interested in working with him and another friend of his named [Kevin Nilson](https://twitter.com/javaclimber) (who is also a good friend now) on a full-scale conference. I’ve been involved with chairing a lot of conference tracks over the years but never been the driving force behind organizing an entire conference. While the idea was definitely exciting it was also a little scary – at least initially.

After talking more with Peter and Kevin I decided to jump in and we’ve been working hard to put on a top-notch conference called [AngularU](http://angularu.com) (short for “Angular University”) that will be held in **San Francisco** from **June 22 – 25, 2015**. I’m really excited about the conference and thought I’d discuss some of the key reasons in this post.

## The Web is Changing Rapidly

It’s no secret that the web is changing at a rapid pace. In fact “rapid” may be an understatement since in the time I’ve been writing this post I suspect several new JavaScript or CSS libraries have been released on Github. Keeping up with the pace of Web technologies can be a challenging proposition to say the least although for me personally I think it’s all part of the fun of working in this space. In my training classes I always tell people that, “If you’re getting bored you’re just not trying hard enough” since there’s so much to learn.

When Peter and I first discussed the concept of doing an Angular conference I thought about it more and realized that while focusing on Angular would be good, several other conferences were already doing that exact thing really well, such as [ng-conf](http://www.ng-conf.org/) (friends from the ng-conf event such as [Aaron Frost](https://twitter.com/js_dev) and [Joe Eames](https://twitter.com/josepheames) have even taken the time to provide us with a lot of excellent feedback). I took a step back and thought about what web technologies I’d like to see included if I could create my own conference given the rapid pace of the web in general. After thinking about it more I decided that I’d want a conference that had Angular sessions (including Angular 2!) as well as sessions on TypeScript, ES6, and Web Components. I suggested that general concept to Peter and we agreed that providing broader coverage of web technologies would be the overall goal of AngularU. As a result, the [AngularU conference](http://angularu.com) now has many talks lined up to discuss TypeScript, ES6, Web Components, CSS, and more, which should help attendees get a good understanding of the latest and greatest technologies out there.

 

## Top-Notch Experts and Speakers

There are a lot of conferences out there and many claim to have the “best speakers in the industry”. If I was going to be involved with a conference then I definitely wanted some of the top names and key players in the industry to be involved. Peter, Kevin and I decided that we wanted the Google team to give the keynote on [Angular 2](http://angular.io). Fortunately, we had a meeting with [Brad Green](https://twitter.com/bradlygreen) and [Igor Minar](https://twitter.com/igorminar) from the Angular team and they agreed to give the keynote. We’re really excited that Brad, Igor, and [Misko Hevery](https://twitter.com/mhevery) have agreed to come speak! It’ll be great to hear about the latest updates on Angular 2 in June.


 

We didn’t stop there. With the big announcement about [TypeScript](http://typescriptlang.org) and [Angular](http://angular.io) we also wanted a top-notch person involved with TypeScript to come speak. As a result, the day 2 keynote at AngularU will be by [Jonathan Turner](https://twitter.com/jntrnr), the program manager on the TypeScript team. We’re super excited to have Jonathan involved with the conference as well. We also wanted someone very well known in the JavaScript community to be involved who’s had a tremendous influence over the years on the JavaScript code most of us write today. We’re very excited to have [Douglas Crockford](http://javascript.crockford.com/) coming as well to give a day 1 afternoon keynote to discuss his latest findings in the world of JavaScript!

In addition to the keynote speakers we also have many additional great speakers and experts in the industry such as [John Papa](https://twitter.com/john_papa) (he and I will be giving a talk after the day 1 keynote), [John Lindquist](https://twitter.com/johnlindquist), [Lukas Ruebbelke](https://twitter.com/simpulton) and many more. Check out the [http://angularu.com](http://angularu.com) site for a list of speakers (more are being added as we finalize talk selections).


## AngularU Sessions

So what are some of the topics speakers will be covering at AngularU? Here are a few of the talks:

- Angular 2 Keynote
- TypeScript Keynote
- JavaScript/ES6 Keynote
- Migrating to Angular 2
- Crazy Fast Prototyping with Angular
- Modular Angular: Building Reusable Components Today
- ES6 with Angular Today
- Web Components and the Shadow DOM
- Creating D3 components with Angular 2 and TypeScript
- Angular 2 Server Rendering
- Angular 2 Core Concepts
- Foundation for Apps: Integrating Angular with Responsive Web Apps
- Many more!

Check out the [AngularU website](http://angularu.com) to see a complete list of speakers and talks. We’ll be updating it with new talks and speakers in early April so check back often for the latest.

## Have Fun While Learning and Networking

People learn best while they’re having fun and we plan to make AngularU a fun yet educational event. From the AngularU game room to the lunch and attendee party events, we have a lot of great stuff planned that will help people meet and get a chance to discuss all of the things happening with web development. In addition to the events planned for attendees, we also have a community night planned that anyone from the San Francisco area can attend and have other events to give everyone a chance to talk with the speakers and other experts who will be attending.

 

## Come Join Us at AngularU in San Francisco!

We’ve come a long way from the initial call that Peter and I had about potentially doing a conference and are super excited about all of the great speakers and content that will be at the event! We hope that you’ll come join in the fun and learning at AngularU!

[AngularU](http://angularu.com) will run from June 22nd – 23rd at the Hyatt Regency San Francisco Airport hotel and conference center. Post-conference workshops will run on June 24th – 25th at the same location.

 

[](https://aspblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Introducing-the-Angular-U-Conference_A554/image_14.png)</content:encoded></item><item><title>Creating a TypeScript Workflow with Gulp</title><link>https://blog.codewithdan.com/creating-a-typescript-workflow-with-gulp/</link><guid isPermaLink="true">https://blog.codewithdan.com/creating-a-typescript-workflow-with-gulp/</guid><pubDate>Mon, 23 Mar 2015 00:00:00 GMT</pubDate><content:encoded>TypeScript provides a lot of great functionality that lets you leverage many of the features available in ES6 today but how do you get started using it in your favorite editor? If you’re using Visual Studio or WebStorm then TypeScript support can be used directly and everything happens magically without much work on your part. But, if you’re using Sublime Text, Brackets, Atom, or another editor you’ll have to find a plugin to compile .ts files to JavaScript or create your own custom workflow.

While several plugins exist to compile TypeScript and even provide code help as you’re writing TypeScript code in different editors, I generally prefer to use my own custom workflow. There are multiple benefits associated with going with this approach including the ability to standardize and share tasks across team members as well as being able to tie the workflow into a custom build process used in continuous integration scenarios. In this post I’ll walk through the process of creating a custom TypeScript workflow using [Gulp](http://gulpjs.com/) (a JavaScript task manager). It’s a workflow setup that my friend [Andrew Connell](https://twitter.com/andrewconnell) and I created when we recently converted an [application](https://github.com/DanWahlin/AngularTypeScript) to TypeScript. Throughout the post you’ll learn how to setup a file named gulpfile.js to compile TypeScript to JavaScript and also see how you can “lint” your TypeScript code to make sure it’s as clean and tidy as possible.

## Getting Started Creating a Custom TypeScript Workflow

I talked about using Gulp to automate the process of transpiling ES6 to ES5 in a [previous post](http://weblogs.asp.net/dwahlin/getting-started-with-es6-%E2%80%93-transpiling-es6-to-es5). The general process shown there is going to be used here as well although I’ll be providing additional details related to TypeScript. If you’re new to Gulp, it’s a JavaScript task manager that can be used to compile .ts files to .js files, lint your TypeScript, minify and concatenate scripts, and much more. You can find additional details at [http://gulpjs.com](http://gulpjs.com/).

Here’s a step-by-step walk-through that shows how to get started creating a TypeScript workflow with Gulp. Although there are several steps to perform, it’s a one-time setup that can be re-used across projects. If you’d prefer to use a starter project rather than walking through the steps that are provided in this post then see the project at [https://github.com/DanWahlin/AngularIn20TypeScript](https://github.com/DanWahlin/AngularIn20TypeScript) or download the project associated with the exact steps shown in this post [here](https://dl.dropboxusercontent.com/u/6037348/TypeScript/typescriptGulpWorkflow.zip).

### Creating the Application Folders and Files

1. Create a new folder where your project code will live. You can name it anything you’d like but I’ll call it **typescriptDemo** in this post.
2. Create the following folders inside of **typescriptDemo**:
    - **src**
    - **src/app**
    - **src/js**
3. Open a command-prompt in the root of the **typescriptDemo** folder and run the following **npm** command (you’ll need to have [Node.js](http://nodejs.org/) installed) to create a file named **package.json**.

**npm init**

 

- Answer the questions that are asked. For this example you can go with all of the defaults it provides. After completing the wizard a new file named **package.json** will be added to the root of the folder.
- Create the following files in the **typescriptDemo** folder:

 

- **gulpfile.js**
- **gulpfile.config.js**
- **tslint.json**

 

### Installing Gulp, Gulp Modules and TSD

1. Now let’s get **Gulp** installed globally on your machine. Open a command-prompt and run the following command:

**npm install gulp -g**

 

- Open **package.json** and add the following **devDependencies** property into it. The location of the property in the file doesn’t really matter but I normally put it at the bottom. A sample **package.json** file with the dependencies already in it can be found at [https://github.com/DanWahlin/AngularIn20TypeScript](https://github.com/DanWahlin/AngularIn20TypeScript).

 

**Note:** The module versions shown here will certainly change over time. You can visit [http://npmjs.org](http://npmjs.org) to find the latest version of a given module.

```
&quot;devDependencies&quot;: { 
    &quot;gulp&quot;: &quot;^3.8.11&quot;, 
    &quot;gulp-debug&quot;: &quot;^2.0.1&quot;, 
    &quot;gulp-inject&quot;: &quot;^1.2.0&quot;, 
    &quot;gulp-sourcemaps&quot;: &quot;^1.5.1&quot;, 
    &quot;gulp-tslint&quot;: &quot;^1.4.4&quot;, 
    &quot;gulp-typescript&quot;: &quot;^2.5.0&quot;, 
    &quot;gulp-rimraf&quot;: &quot;^0.1.1&quot; 
}
```

 

- Ensure that your command window path is at the root of the **typescriptDemo** folder and run the following command to install the dependencies: **npm install**
- The [http://definitelytyped.org](http://definitelytyped.org) site provides a Node.js module named **tsd** that can be used to install TypeScript type definition files that provide enhanced code help in various editors. Install the **tsd** module globally by running the following command: **npm install tsd@next -g**
- Run the following command: **tsd init**
- Open the tsd.json file that is generated in the root of **typescriptDemo** and change the following properties to include “tools” in the path as shown next: **&quot;path&quot;: &quot;tools/typings&quot;** and **&quot;bundle&quot;: &quot;tools/typings/tsd.d.ts&quot;**
- Let’s use **tsd** to install a TypeScript definition file for Angular (an angular.d.ts file) and update the **tsd.json** file with the Angular file details as well. Run the following command:

 

**tsd install angular --save**

**Note:** You can install additional type definition files for other JavaScript libraries/frameworks by running the same command but changing the name from “angular” to the appropriate library/framework. See [http://definitelytyped.org/tsd](http://definitelytyped.org/tsd) for a list of the type definition files that are available.

 

- Let’s now install the jQuery type definition as well since the Angular type definition file has a dependency on it:**tsd install jquery --save**
- If you look in the **typescriptDemo** folder you’ll see a new folder is created named **tools**. Inside of this folder you’ll find a **typings** folder that has an **angular/angular.d.ts** type definition file and a **jquery/jquery.d.ts** file in it. You’ll also see a file named **tsd.json**.
- Create a file named **typescriptApp.d.ts** in the **typescriptDemo/tools/typings** folder. This file will track all of the TypeScript files within the application to simplify the process of resolving dependencies and compiling TypeScript to JavaScript.
- Add the following into the **typescriptApp.d.ts** file and save it (the comments are required for one of the Gulp tasks to work properly):

 

```
//{
//}
```

### Creating Gulp Tasks

1. Open [https://github.com/DanWahlin/AngularIn20TypeScript/blob/master/gulpfile.config.js](https://github.com/DanWahlin/AngularIn20TypeScript/blob/master/gulpfile.config.js) in your browser and copy the contents of the file into your empty **gulpfile.config.js** file. This file sets up paths that will be used when performing various tasks such as compiling TypeScript to JavaScript.
2. Open [https://github.com/DanWahlin/AngularIn20TypeScript/blob/master/gulpfile.js](https://github.com/DanWahlin/AngularIn20TypeScript/blob/master/gulpfile.js) in your browser and copy the contents of the file into your empty **gulpfile.js** file. This creates the following Gulp tasks:

**gen-ts-refs:** Adds all of your TypeScript file paths into a file named **typescriptApp.d.ts**. This file will be used to support code help in some editors as well as aid with compilation of TypeScript files.

**ts-lint:** Runs a “linting” task to ensure that your code follows specific guidelines defined in the tslint.json file.

**compile-ts:** Compiles TypeScript to JavaScript and generates source map files used for debugging TypeScript code in browsers such as Chrome.

**clean-ts:** Used to remove all generated JavaScript files and source map files.

**watch:** Watches the folder where your TypeScript code lives and triggers the ts-lint, compile-ts, and gen-ts-refs tasks as file changes are detected.

**default:** The default Gulp task that triggers the other tasks to run. This task can be run by typing **gulp** at the command line when you’re within the **typescriptDemo** folder.

3. Open [https://github.com/DanWahlin/AngularIn20TypeScript/blob/master/tslint.json](https://github.com/DanWahlin/AngularIn20TypeScript/blob/master/tslint.json) in your browser and copy the contents of the file into your empty **tslint.json** file. This file contains the “linting” guidelines that will be applied to your code. You’ll more than likely want to tweak some of the settings in the file depending on your coding style.

### Compiling TypeScript to JavaScript

1. Now that the necessary files are in place (whew!), let’s add a test TypeScript file into the application folder and try to compile it to JavaScript. Create a file named **customer.ts** in the **typescriptDemo/src/app** folder.
2. Add the following code into the **customer.ts** file:
    
    ```
    class Customer {
        name: string;
    
        constructor(name: string) {
            this.name = name;
        }
    
        getName() {
            return this.name;
        }
    }
    
    ```
    
     
3. Run the following command in your command window (run it from the root of the **typescriptDemo** folder): **gulp**
4. You should see output that shows that the tasks have successfully completed.
5. Open the **src/js** folder and you should see that two new files named **customer.js** and **customer.js.map** are now there.
6. Go back to **customer.ts** and change the case of the **Customer** class to **customer**. Save the file and notice that the gulp tasks run in the command window. You should see a **tslint** error saying that the case of the class is wrong.
7. Your Gulp/TypeScript workflow is now ready to go.

## Conclusion

In this post you’ve seen the steps required to create a custom TypeScript workflow using the Gulp JavaScript task runner. Although you may certainly want to tweak some of the settings and tasks, the steps shown here should help get you started using TypeScript in your applications.

In case you missed it earlier in the post, a project that has all of the steps already completed can be found at [https://github.com/DanWahlin/AngularIn20TypeScript](https://github.com/DanWahlin/AngularIn20TypeScript). You can also find the exact setup discussed in this post [here](https://dl.dropboxusercontent.com/u/6037348/TypeScript/typescriptGulpWorkflow.zip) (just run **npm install** to get the required modules for the project).</content:encoded></item><item><title>Getting Started with TypeScript – Classes, Types and Interfaces</title><link>https://blog.codewithdan.com/getting-started-with-typescript-classes-types-and-interfaces/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-with-typescript-classes-types-and-interfaces/</guid><pubDate>Sun, 08 Mar 2015 00:00:00 GMT</pubDate><content:encoded>One of the big announcements at [ng-conf](http://ng-conf.org &quot;ng-conf&quot;) this week was the collaborative work that the Angular and TypeScript teams have been doing. Angular 2 will leverage TypeScript heavily and you can as well in any type of JavaScript application (client-side or even server-side with Node.js). You can also use ES6 or ES5 with Angular 2 if you decide that TypeScript isn&apos;t for you. [Andrew Connell](http://twitter.com/andrewconnell &quot;Andrew Connell&quot;) and [I](http://twitter.com/danwahlin &quot;Dan Wahlin&quot;) gave a talk on TypeScript at ng-conf that you can view here if interested:

&lt;iframe src=&quot;https://www.youtube.com/embed/U7NYTKgkZgo?list=PLOETEcp3DkCoNnlhE-7fovYvqwVPrRiY7&quot; width=&quot;560&quot; height=&quot;315&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

I&apos;ve been a big fan of TypeScript for many years and decided to update a few previous posts I&apos;ve done to help people get started with it. In this first post I&apos;ll talk about the basics of TypeScript and provide additional details about the language in future posts.

#### TypeScript Overview

Here’s what the TypeScript site ([http://typescriptlang.org](http://typescriptlang.org)) says about TypeScript:

> TypeScript lets you write JavaScript the way you really want to. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source.

TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. To sum it up, TypeScript is a language based on ES6 standards that can be compiled to JavaScript. It isn’t a stand-alone language that’s completely separate from JavaScript’s roots though. It’s a superset of JavaScript which means that standard JavaScript code can be placed in a TypeScript file (a file with a .ts extension) and used directly. That’s a very important point/feature of the language since it means you can use existing JavaScript code and frameworks with TypeScript without having to do major code conversions to make it all work. Once a TypeScript file is saved it can be compiled to JavaScript using TypeScript’s tsc.exe compiler tool, by using a variety of editors/tools, or by using JavaScript task runners such as Grunt or Gulp.

TypeScript offers several key features. First, it provides built-in static type support meaning that you define variables and function parameters as being “string”, “number”, “boolean”, and more to avoid incorrect types being assigned to variables or passed to functions. Second, TypeScript provides a way to write modular code by directly supporting ES6 class and module definitions and it even provides support for custom interfaces that can be used to drive consistency. Finally, TypeScript integrates with several different tools such as Brackets, Sublime Text, Emacs, Visual Studio, and Vi to provide syntax highlighting, code help, build support, and more depending on the editor. Find out more about editor support at [http://www.typescriptlang.org/#Download](http://www.typescriptlang.org/#Download). In addition to all of this, TypeScript supports much of ES6 (with more and more ES6 features being added in each release) and also includes support for concepts such as generics (code templates), interfaces, and more.
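As a small illustration of the static typing described above, consider the following sketch (the function and variable names are invented for this example):

```typescript
// A function with typed parameters and a typed return value.
// Passing a value of the wrong type is flagged at compile time,
// before the code ever runs.
function calculateTotal(price: number, quantity: number): number {
    return price * quantity;
}

const total: number = calculateTotal(10, 3); // 30

// The next line would fail to compile:
// calculateTotal("10", 3); // error: string is not assignable to number
```

The commented-out call shows the kind of mistake the compiler catches upfront that plain JavaScript would only surface at runtime.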

TypeScript can also be used with existing JavaScript frameworks/libraries such as Angular, Node.js, jQuery, and others and even catch type issues and provide enhanced code help as you build your apps. Special “declaration” files that have a _d.ts_ extension are available for over 300 frameworks/libraries, and with the latest announcement at ng-conf by the Angular and TypeScript teams I fully expect that number to go even higher. Visit [http://definitelytyped.org](http://definitelytyped.org) to access the different declaration files that can be used with tools to provide additional code help and ensure that a string isn’t passed to a parameter that expects a number. Although declaration files aren’t required, TypeScript’s support for them makes it easier to catch issues upfront while working with existing libraries such as Angular and jQuery.

#### Getting Started with TypeScript

To get started learning TypeScript visit the TypeScript Playground available at [http://www.typescriptlang.org](http://www.typescriptlang.org). Using the playground editor you can experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once it’s compiled. Here’s an example of the TypeScript playground in action:

![The TypeScript Playground](https://aspblogs.blob.core.windows.net/media/dwahlin/Media/Figure1_42D258B4.png)

 

One of the first things that may stand out to you about the code shown above is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container, which helps tremendously with re-use and maintainability, especially in enterprise-scale JavaScript applications. While you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes it quite easy, especially if you come from an object-oriented programming background. The Greeter class shown in the TypeScript Playground is shown next:

```
class Greeter {
    greeting: string;

    constructor (message: string) {
        this.greeting = message;
    }

    greet() {
        return &quot;Hello, &quot; + this.greeting;
    }
}
```

Looking through the code you’ll notice that types can be defined on variables and parameters such as _greeting: string_, that constructors can be defined, and that functions can be defined such as _greet()_. The ability to define types is a key feature of TypeScript (and where its name comes from) that can help identify bugs upfront before ever running the code. Many types are supported including primitive types like string, number, boolean, array, and null as well as object literals and more complex types such as HTMLInputElement (for an &lt;input&gt; tag). Custom types can be defined as well.
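To make the primitive and custom types mentioned above a bit more concrete, here’s a brief sketch (all of the names are illustrative):

```typescript
// Primitive type annotations
let title: string = "Getting Started with TypeScript";
let version: number = 1.4;
let isReleased: boolean = true;
let tags: string[] = ["TypeScript", "JavaScript"];

// A custom type can be used anywhere a built-in type can
class Point {
    x: number;
    y: number;

    constructor(x: number, y: number) {
        this.x = x;
        this.y = y;
    }
}

let origin: Point = new Point(0, 0);
```

Assigning, say, a number to _title_ or a _Point_ to _isReleased_ would be flagged by the compiler before the code runs.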

The JavaScript output by compiling the TypeScript Greeter class is shown next:

```
var Greeter = (function () {
    function Greeter(message) {
        this.greeting = message;
    }
    Greeter.prototype.greet = function () {
        return &quot;Hello, &quot; + this.greeting;
    };
    return Greeter;
})();
```

Notice that the code uses JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to keep the variables and functions out of the global JavaScript scope. This is an important feature that helps avoid naming collisions between variables and functions.

In cases where you’d like to wrap a class in a naming container (similar to a namespace or package in other languages) you can use TypeScript’s _module_ keyword. The following code shows an example of wrapping an AcmeCorp module around the Greeter class. In order to create a new instance of Greeter the module name must now be used. This can help avoid naming collisions that may occur with the Greeter class.

 

```
module AcmeCorp {
    export class Greeter {
        greeting: string;

        constructor (message: string) {
            this.greeting = message;
        }

        greet() {
            return &quot;Hello, &quot; + this.greeting;
        }
    }
}

var greeter = new AcmeCorp.Greeter(&quot;world&quot;);
```

In addition to being able to define custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript’s _extends_ keyword. The following code shows an example of using inheritance to define two report objects:

 

```
class Report {
    name: string;

    constructor (name: string) {
        this.name = name;
    }

    print() {
        alert(&quot;Report: &quot; + this.name);
    }
}

class FinanceReport extends Report {
    constructor (name: string) {
        super(name);
    }

    print() {
        alert(&quot;Finance Report: &quot; + this.name);
    }

    getLineItems() {
        alert(&quot;5 line items&quot;);
    }
}

var report = new FinanceReport(&quot;Month&apos;s Sales&quot;);
report.print();
report.getLineItems();
```

 

In this example a base Report class is defined that has a variable (name), a constructor that accepts a name parameter of type string, and a function named print(). The FinanceReport class inherits from Report by using TypeScript’s _extends_ keyword. As a result, it automatically has access to the print() function in the base class. In this example the FinanceReport overrides the base class’s print() function and adds a getLineItems() function of its own. The FinanceReport class also forwards the name value it receives in the constructor to the base class using the _super()_ call.

TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects and ensure that proper types are used. The following code shows an example of an interface named _Thing_ (from the TypeScript samples) and a class named _Plane_ that implements the interface to drive consistency across the app. Notice that the Plane class includes intersect and normal as a result of implementing the interface.

 

```
interface Thing {
    intersect: (ray: Ray) =&gt; Intersection;
    normal: (pos: Vector) =&gt; Vector;
    surface: Surface;
}

class Plane implements Thing {
    normal: (pos: Vector) =&gt; Vector;

    intersect: (ray: Ray) =&gt; Intersection;

    constructor (norm: Vector, offset: number, public surface: Surface) {
        this.normal = function (pos: Vector) { return norm; }
        this.intersect = function (ray: Ray): Intersection {
            var denom = Vector.dot(norm, ray.dir);
            if (denom &gt; 0) {
                return null;
            } else {
                var dist = (Vector.dot(norm, ray.start) + offset) / (-denom);
                return { thing: this, ray: ray, dist: dist };
            }
        }
    }
}
```

 

At first glance it doesn’t appear that the surface member is implemented in Plane but it’s actually included automatically due to the _public surface: Surface_ parameter in the constructor. Adding _public varName: Type_ to a constructor automatically adds a typed variable into the class without having to explicitly write the code as with _normal_ and _intersect_.
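To see that constructor shorthand in isolation, here’s a minimal sketch (the class names are invented for illustration):

```typescript
// Explicit version: declare the property, then assign it in the constructor.
class EngineExplicit {
    cylinders: number;

    constructor(cylinders: number) {
        this.cylinders = cylinders;
    }
}

// Shorthand version: "public cylinders: number" in the constructor both
// declares the typed property and assigns the incoming value to it.
class EngineShorthand {
    constructor(public cylinders: number) { }
}

const e1 = new EngineExplicit(6);
const e2 = new EngineShorthand(6);
// Both instances expose a cylinders property with the value 6.
```

The two classes compile to equivalent JavaScript; the shorthand simply saves the declaration and assignment boilerplate.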

TypeScript has many additional language features but defining types and creating classes, modules, and interfaces are some of the key features it offers. I&apos;ll be covering additional features in future posts so stay tuned. Subscribe to my Web Weekly newsletter (at the top of the blog or below) to stay up-to-date on all of the latest technology that I find and write about.</content:encoded></item><item><title>The AngularJS Custom Directives Video Training Course Has Been Released!</title><link>https://blog.codewithdan.com/the-angularjs-custom-directives-video-training-course-has-been-released/</link><guid isPermaLink="true">https://blog.codewithdan.com/the-angularjs-custom-directives-video-training-course-has-been-released/</guid><pubDate>Sun, 22 Feb 2015 00:00:00 GMT</pubDate><content:encoded>[](https://www.udemy.com/angularjs-custom-directives/)

 

I’m excited to announce that my new [AngularJS Custom Directives](https://www.udemy.com/angularjs-custom-directives/?couponCode=ajsd-29bucks#/) video training course has been released on Udemy.com! If you’ve been wanting to dive deeper into AngularJS directives and understand how they work while also clarifying terms such as isolate scope, transclusion, linking, and much more then this is the course for you. If you enjoyed my [AngularJS JumpStart](http://tinyurl.com/AngularJSJumpStart) course or my [AngularJS in 60ish Minutes](http://weblogs.asp.net/dwahlin/video-tutorial-angularjs-fundamentals-in-60-ish-minutes) video on YouTube then I guarantee you’ll love this course.

The first 2 modules of the course are available to [view absolutely free](https://www.udemy.com/angularjs-custom-directives/?couponCode=ajsd-29bucks#/) so that you can check it out. Here’s an additional free video from the Isolate Scope section in Module 3 of the course.

## Understanding Shared and Isolate Scope

&lt;iframe src=&quot;https://www.youtube.com/embed/P4JnEqlnLnE&quot; width=&quot;640&quot; height=&quot;480&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;allowfullscreen&quot;&gt;&lt;/iframe&gt;

## Get a Huge Discount or Win a FREE Copy!

Use [this link to get a huge discount on the course](https://www.udemy.com/angularjs-custom-directives/?couponCode=ajsd-29bucks)! The course includes over 4 hours of videos and tons of source code that you can follow along with during the videos. If you’re interested in a chance to win a free copy of the course, select one of the following options or do both to be entered twice!

1. Sign-up for the Web Weekly newsletter (sign-up at the top of my blog). I’ll randomly pick winners from the subscribers.
2. Follow [@DanWahlin](https://twitter.com/danwahlin) on Twitter and click the tweet button below to tweet a message about the AngularJS Custom Directives course using hash tag #angularjsdirectives. I’ll randomly pick winners from the list of tweets that use the hash tag and send the message.

[Tweet](https://twitter.com/intent/tweet?original_referer=http%3A%2F%2Fweblogs.asp.net/dwahlin&amp;text=RT%20for%20a%20chance%20to%20win%20a%20free%20copy%20of%20the%20new%20AngularJS%20Custom%20Directives%20course%20by%20%40DanWahlin!%20%20%23angularjsdirectives&amp;tw_p=tweetbutton&amp;url=http://ow.ly/JueA1)

## AngularJS Custom Directives Course Overview

Are you interested in learning how to take your AngularJS skills to the next level? Have you been confused by terms like transclusion, isolate scope, interpolation, local scope properties, and more? Have you wanted to build custom directives but didn&apos;t know where to start? Look no further than the [AngularJS Custom Directives](https://www.udemy.com/angularjs-custom-directives/?couponCode=ajsd-29bucks#/) course!

Throughout this course you&apos;ll get a step-by-step look at the process of creating custom directives and cover key concepts that you need to know to take your AngularJS skills to the next level. Topics such as the $compile service, the directive definition object (DDO), the link() function, isolate scope, controller integration, transclusion, working with the $interpolate service, $asyncValidators, and much more are covered, as well as techniques for structuring AngularJS directive code.

In addition to expert instruction by AngularJS [Google Developer Expert](https://developers.google.com/experts/people/dan-wahlin) (GDE) Dan Wahlin you&apos;ll also be provided with hands-on code samples that you can follow along with at your own pace. Just pause or rewind the video if you need to see the code again or jump right to the end solution that&apos;s provided if you&apos;d like. Begin and end code is provided so that you can maximize your learning and become an expert in building directives!

The modules covered in the course include:

- Getting Started with Directives
- Shared and Isolate Scope
- The Link Function
- Using Controllers in Directives
- Bonus Content (several robust directive examples)

Many additional details are provided throughout the modules including coverage of the $parse and $interpolate services, how controllers and the link function can be used in concert, why and how to use transclusion to merge custom content, pros and cons of available coding approaches for custom directives, techniques for passing parameter data to functions when using local scope properties, and much, much more.

View a complete list of [topics covered here](https://www.udemy.com/angularjs-custom-directives/?couponCode=ajsd-29bucks#/).</content:encoded></item><item><title>Adding Azure Active Directory Configuration Code and Assemblies into an AngularJS/ASP.NET MVC Application</title><link>https://blog.codewithdan.com/adding-azure-active-directory-configuration-code-and-assemblies-into-an-angularjsasp-net-mvc-application/</link><guid isPermaLink="true">https://blog.codewithdan.com/adding-azure-active-directory-configuration-code-and-assemblies-into-an-angularjsasp-net-mvc-application/</guid><pubDate>Sun, 25 Jan 2015 00:00:00 GMT</pubDate><content:encoded>In a [previous post](http://weblogs.asp.net/dwahlin/registering-a-custom-angularjs-application-with-azure-active-directory) I discussed the process for registering an application with [Azure Active Directory](http://azure.microsoft.com/en-us/services/active-directory/) (AAD) so that users can be authenticated. AAD supports a wide range of features that can be used to perform authentication, authorization, and claims-based security tasks.

Once an application has been registered with AAD you’ll need to add configuration code into the application’s web.config file, add related NuGet packages, and add custom C# code into the application in order to take advantage of AAD authentication functionality. In Part 3 of an article series I’m writing for [http://itunity.com](http://itunity.com) I discuss these tasks and walk through the complete process. Here’s an excerpt from the article.

## Integrating AngularJS with Azure Active Directory and Office 365/SharePoint, Part 3 – Adding AAD Configuration and Assemblies into an Application

You can choose from many different techniques to authenticate users in an application. You can build a custom solution using a file or database, you can use Active Directory, you can deploy a third-party solution, or you can use a cloud-based service, to name just a few. Every situation is unique so the authentication choice made really depends upon the requirements of the application. In cases where an application can be used from a variety of locations and devices, a cloud-based authentication solution can work quite well.

[Part 2](http://www.itunity.com/article/integrating-angularjs-o365-aad-registering-custom-app-720) of this article series showed the process for registering an application with the [Azure Active Directory](http://azure.microsoft.com/en-us/services/active-directory/) (AAD) cloud service. Once you’ve registered an application, you can configure the Client ID and key created in AAD in your custom application. You can then add AAD assemblies to provide authentication functionality.

In this article, I’ll discuss how to get the necessary configuration and assemblies in place to tie the AngularJS Expense Manager application discussed in [Part 1](http://www.itunity.com/article/integrating-angularjs-aad-office-365sharepoint-part-1-622) to AAD so that users can authenticate into Office 365 services. In Part 4 I will provide a walk-through of the code that needs to be added into the application to enable user authentication with AAD.

Let’s start by discussing the assemblies that need to be installed into the application and how you can simplify that process by using NuGet.  

### Installing the required assemblies using NuGet

Once an application has been registered with AAD (refer to the Part 2 article for step-by-step details on doing that) you can begin the process of integrating AAD authentication into the application. To get started, you’ll need to add several assemblies into your project including some AAD-specific assemblies as well as OWIN assemblies. The AAD assemblies will be used to communicate with AAD while the OWIN assemblies will be used to hook AAD authentication into the custom application. Here’s a step-by-step walkthrough of getting the assemblies into a project.

Create a new ASP.NET MVC application in Visual Studio (the sample application is named “ExpenseManager”). Now select Tools → NuGet Package Manager → Package Manager Console from the Visual Studio menu.

Once the Package Manager Console is open, run the following commands in it to get the necessary assemblies into place as shown in Figure 1. 

Figure 1. Installing NuGet packages using the Package Manager Console.  

Type each of the following commands into the command-prompt area and press Enter to install the specific NuGet package:

```
Install-Package -Id Microsoft.Owin.Host.SystemWeb
Install-Package -Id Microsoft.Owin.Security.Cookies
Install-Package -Id Microsoft.Owin.Security.OpenIdConnect
Install-Package -Id Microsoft.IdentityModel.Clients.ActiveDirectory
```

  
After installing the NuGet packages, open the References node in the Solution Explorer and note that several new assemblies have been added that are related to AAD and authentication. Figures 2 and 3 show a few of the assemblies that are added:  

Figure 2. Assemblies related to AAD.

Figure 3. OWIN assemblies.

Now that the essential assemblies are installed, it’s time to add the application’s AAD Client ID and Key into the application. These values are needed to “hook” the application to AAD.

Note that all of the code that follows is from the Expense Manager application that’s available on the [OfficeDev GitHub site](https://github.com/OfficeDev/SP-AngularJS-ExpenseManager-Code-Sample).

Read the full article at [http://www.itunity.com/article/integrating-aad-services-angularjs-office-365-part-3-770](http://www.itunity.com/article/integrating-aad-services-angularjs-office-365-part-3-770 &quot;http://www.itunity.com/article/integrating-aad-services-angularjs-office-365-part-3-770&quot;).</content:encoded></item><item><title>Adding Azure Active Directory and OWIN Code into an AngularJS/ASP.NET MVC Application to Handle User Authentication</title><link>https://blog.codewithdan.com/adding-azure-active-directory-and-owin-code-into-an-angularjsasp-net-mvc-application-to-handle-user-authentication/</link><guid isPermaLink="true">https://blog.codewithdan.com/adding-azure-active-directory-and-owin-code-into-an-angularjsasp-net-mvc-application-to-handle-user-authentication/</guid><pubDate>Sun, 25 Jan 2015 00:00:00 GMT</pubDate><content:encoded>In a [previous post](http://www.itunity.com/article/integrating-aad-services-angularjs-office-365-part-3-770) I discussed how to setup the necessary configuration code and assemblies in an AngularJS/ASP.NET MVC application in order to authenticate users against [Azure Active Directory](http://azure.microsoft.com/en-us/services/active-directory/) (AAD). Once the initial configuration is complete you can write code to redirect users to the AAD login screen to retrieve an ID token.

In Part 4 of an article series I’m writing for [http://itunity.com](http://itunity.com) I discuss the necessary code that’s required to authenticate a user and retrieve the ID token. Additional topics covered include hooking AAD into the ASP.NET MVC pipeline, creating an Entity Framework token cache, triggering authentication against AAD in MVC controllers, and more. Here’s an excerpt from the article. The complete code for the application discussed in the article series can be found on the [OfficeDev Github site](https://github.com/OfficeDev/SP-AngularJS-ExpenseManager-Code-Sample).  

## Adding AAD Configuration and Assemblies into an Application

[Part 3](http://www.itunity.com/article/integrating-aad-services-angularjs-office-365-part-3-770) of this series covered how to access the Client ID, Key, and Tenant ID values from Azure Active Directory (AAD) and add them into web.config. It also showed how to get the necessary AAD and OWIN NuGet packages in place and create a _SettingsHelper_ class to simplify the process of accessing web.config values.

In this article, you’ll see how the values defined in web.config can be used to associate a custom application with AAD. Topics covered include setting up a token storage cache, hooking AAD code into the OWIN startup process, and creating an ASP.NET MVC controller to display a login page to end users and handle directing them to the application upon successful authentication. All of the code that follows is from the [Expense Manager application](https://github.com/OfficeDev/SP-AngularJS-ExpenseManager-Code-Sample) that’s available on the OfficeDev GitHub site.

Let’s kick things off by looking at AAD token storage and the role it plays in applications.  

### AAD Token Storage

The NuGet packages added into the application (see [Part 3](http://www.itunity.com/article/integrating-aad-services-angularjs-office-365-part-3-770) of this series) provide the necessary functionality to authenticate a user with AAD. Once authenticated, AAD will return an ID token that can be stored by the application and used when secured resources such as Web API, Office 365 APIs, or other resources are accessed. The AAD documentation provides a diagram that sums up the overall authentication workflow well:  

![AAD authentication workflow](http://www.itunity.com/content/content/771/wahlin_fig1.png)

  
Figure 1. The flow as a user logs into AAD, gets an ID token, and then accesses an application and additional resources. This image is from [http://msdn.microsoft.com/en-us/library/azure/dn499820.aspx](http://msdn.microsoft.com/en-us/library/azure/dn499820.aspx).

While the token received by the application can be stored in memory during development, it’s recommended that a more robust token store be put in place to handle that task for production. You can find several sample applications that integrate with AAD and handle tokens on the [Azure Active Directory Github samples site](https://github.com/AzureADSamples). Some of the samples use an in-memory store while others rely on a database, which is recommended when an app is ready to move to production.

In this part of the article, you’ll see the steps required to get an Entity Framework and SQL Server token store in place. If you don’t already have the Entity Framework NuGet package installed in your project you’ll need to install it.

From a high level, the following tasks will be discussed:

1. Create a model class named _PerWebUserCache_ that’s used to define properties that are needed to store and retrieve AAD tokens.
2. Create an Entity Framework DbContext class named _EFADALContext_ that interacts with a SQL Server database.
3. Create a token cache class named _EFADALTokenCache_ that uses the context class to store and retrieve tokens from a SQL Server database.
4. Add startup code that uses the previous classes and is responsible for handling authentication as a user requests a secure resource.

To get started, add a new class named _PerWebUserCache.cs_ into the _Models_ folder of an ASP.NET MVC application. This code defines properties that will be used by the token store. Add the following code into the class:

```
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web;

namespace ExpenseManager.Models
{
   public class PerWebUserCache
   {
       [Key]
       public int EntryId { get; set; }
       public string webUserUniqueId { get; set; }
       public byte[] cacheBits { get; set; }
       public DateTime LastWrite { get; set; }
   }
}
```

  
Listing 1. The PerWebUserCache model class defines properties that will be used by the token store.  

Now add a class named _EFADALContext.cs_ into the _Utils_ folder. This class will derive from Entity Framework’s _DbContext_ class and handle mapping the _PerWebUserCache_ object to the proper table in SQL Server. Add the following code into the _EFADALContext_ class:

```
using ExpenseManager.Models;
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;
using System.Linq;
using System.Web;

namespace ExpenseManager.Utils
{
   public class EFADALContext : DbContext
   {
       public EFADALContext() : base(&quot;EFADALContext&quot;)
       {
       }
       public DbSet&lt;PerWebUserCache&gt; PerUserCacheList { get; set; }
       protected override void OnModelCreating(DbModelBuilder modelBuilder)
       {
           modelBuilder.Conventions.Remove&lt;PluralizingTableNameConvention&gt;();
       }
   }
}
```

  
Listing 2. The EFADALContext class handles table creation and queries with SQL Server.  

Now that the model and database context classes have been created, a token cache class named _EFADALTokenCache_ can be created that uses the context to store and retrieve AAD tokens. This is one of many [classes](https://github.com/AzureADSamples/WebApp-WebAPI-MultiTenant-OpenIdConnect-DotNet/blob/master/TodoListWebApp%2FDAL%2FEFADALTokenCache.cs) provided by the AAD samples site on GitHub that was mentioned earlier.

Add a class named _EFADALTokenCache.cs_ into the _Utils_ folder that has the following code in it:

```
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using ExpenseManager.Models;
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Web;

namespace ExpenseManager.Utils
{
   public class EFADALTokenCache : TokenCache
   {
       private EFADALContext _Context = new EFADALContext();
       string User;
       PerWebUserCache Cache;

       // constructor
       public EFADALTokenCache(string user)
       {
           // associate the cache to the current user of the web app
           User = user;
           this.AfterAccess = AfterAccessNotification;
           this.BeforeAccess = BeforeAccessNotification;
           this.BeforeWrite = BeforeWriteNotification;

           // look up the entry in the DB
           Cache = _Context.PerUserCacheList.FirstOrDefault(c =&gt; 
                    c.webUserUniqueId == User);
           // place the entry in memory
           this.Deserialize((Cache == null) ? null : Cache.cacheBits);
       }

       // clean up the DB
       public override void Clear()
       {
           base.Clear();
           foreach (var cacheEntry in _Context.PerUserCacheList)
               _Context.PerUserCacheList.Remove(cacheEntry);
           _Context.SaveChanges();
       }

       // Notification raised before ADAL accesses the cache.
       // This is your chance to update the in-memory copy from the DB
       // if the in-memory version is stale.
       void BeforeAccessNotification(TokenCacheNotificationArgs args)
       {
           if (Cache == null)
           {
               // first time access
               Cache = _Context.PerUserCacheList.FirstOrDefault(c =&gt; 
                        c.webUserUniqueId == User);
           }
           else
           {   // retrieve last write from the DB
               var status = from e in _Context.PerUserCacheList
                            where (e.webUserUniqueId == User)
                            select new
                            {
                                LastWrite = e.LastWrite
                            };
                // if the in-memory copy is older than the persistent copy,
                // read from storage and update the in-memory copy
                if (status.First().LastWrite &gt; Cache.LastWrite)
                {
                    Cache = _Context.PerUserCacheList.FirstOrDefault(
                             c =&gt; c.webUserUniqueId == User);
                }
           }
           this.Deserialize((Cache == null) ? null : Cache.cacheBits);
       }

       // Notification raised after ADAL accessed the cache.
       // If the HasStateChanged flag is set, ADAL changed the content of the cache
       void AfterAccessNotification(TokenCacheNotificationArgs args)
       {
           // if state changed
           if (this.HasStateChanged)
           {
               Cache = new PerWebUserCache
               {
                   webUserUniqueId = User,
                   cacheBits = this.Serialize(),
                   LastWrite = DateTime.Now
               };

               // update the DB and the lastwrite                
               _Context.Entry(Cache).State = Cache.EntryId == 0 ? 
                  EntityState.Added : EntityState.Modified;
               _Context.SaveChanges();
               this.HasStateChanged = false;
           }
       }
       void BeforeWriteNotification(TokenCacheNotificationArgs args)
       {
           // if you want to ensure that no concurrent write takes place, 
           // use this notification to place a lock on the entry
       }
   }
}
```

  

Listing 3. The EFADALTokenCache class is responsible for storing and retrieving AAD tokens used by the application.  

This code manages an in-memory store that is backed by a database. As changes are made or the local cache becomes stale, calls are made to update the proper fields in the database. With the EFADALTokenCache class in place along with the model and database context classes, it’s now time to add AAD code to the application to allow users to be authenticated. This code ties AAD into OWIN so that any secured application resource triggers the authentication process.
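The staleness check in BeforeAccessNotification boils down to comparing last-write timestamps. Here’s a framework-free JavaScript sketch of that pattern (the names and timestamp values are invented for illustration; the real class compares PerWebUserCache.LastWrite values through Entity Framework):

```
// Framework-free sketch of the staleness check in BeforeAccessNotification.
// Names and values below are invented for illustration only.
function refreshIfStale(inMemory, readFromDb) {
    var persisted = readFromDb();
    if (persisted.lastWrite > inMemory.lastWrite) {
        return persisted;  // persistent copy is newer: replace the stale copy
    }
    return inMemory;       // in-memory copy is current
}

var dbEntry  = { bits: 'token-bytes-v2', lastWrite: 200 };
var memEntry = { bits: 'token-bytes-v1', lastWrite: 100 };

var current = refreshIfStale(memEntry, function () { return dbEntry; });
console.log(current.bits); // 'token-bytes-v2' - the newer database copy wins
```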

Read the full article at [http://www.itunity.com/article/integrating-angularjs-azure-active-directory-services-office-365sharepoint-part-4-771](http://www.itunity.com/article/integrating-angularjs-azure-active-directory-services-office-365sharepoint-part-4-771 &quot;http://www.itunity.com/article/integrating-angularjs-azure-active-directory-services-office-365sharepoint-part-4-771&quot;).</content:encoded></item><item><title>Creating Custom AngularJS Directives Part 6 - Using Controllers</title><link>https://blog.codewithdan.com/creating-custom-angularjs-directives-part-6-using-controllers/</link><guid isPermaLink="true">https://blog.codewithdan.com/creating-custom-angularjs-directives-part-6-using-controllers/</guid><pubDate>Mon, 29 Dec 2014 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/creating-custom-angularjs-directives-part-6-using-controllers/AngularJS_thumb_1008B166.webp) 


## Creating Custom AngularJS Directives Series


Up to this point in the AngularJS directives series you’ve learned about many key aspects of directives but haven’t seen anything about how controllers fit into the picture. Although controllers are typically associated with routes and views, they can also be embedded in AngularJS directives. In fact, there are many scenarios where custom directives can take advantage of controllers to minimize code and simplify maintenance. While using controllers in directives is certainly optional, if you’d prefer to build directives using similar techniques that you use now to build views then you’ll find controllers are essential in many cases. By using controllers, directives start to feel like “child views”.

In this post I’ll walk through the process of integrating controllers into directives and show the role that they can play. Let’s start off by looking at a directive that doesn’t use a controller and talk through the pros and cons of the approach.  
  

## A Directive without a Controller

  
Directives provide several different ways to render HTML, collect data, and perform additional tasks. In situations where a directive is performing a lot of DOM manipulation, using the **link** function makes sense.  Here’s a simple example of the **link** function in action:  

```
(function() {

  var app = angular.module(&apos;directivesModule&apos;);

  app.directive(&apos;domDirective&apos;, function () {
      return {
          restrict: &apos;A&apos;,
          link: function ($scope, element, attrs) {
              element.on(&apos;click&apos;, function () {
                  element.html(&apos;You clicked me!&apos;);
              });
              element.on(&apos;mouseenter&apos;, function () {
                  element.css(&apos;background-color&apos;, &apos;yellow&apos;);
              });
              element.on(&apos;mouseleave&apos;, function () {
                  element.css(&apos;background-color&apos;, &apos;white&apos;);
              });
          }
      };
  });

}());
```

Adding a controller into this directive doesn’t make much sense given that the goal is to handle events and manipulate the DOM. Although it would be possible to accomplish the same task using a view in the directive (along with built-in AngularJS directives such as ng-click) and a controller, there’s really no benefit to doing so when DOM manipulation is the overall end goal.

In cases where you’re manipulating the DOM, integrating data into the generated HTML, handling events, and more, adding a controller can minimize the amount of code you write and simplify the overall process in some cases. To make this clearer, let’s look at an example of a directive that renders a list of items and provides a button that can be used to add items to the list. Here’s the simple output that the directive renders:

[![image_thumb[2]](/images/blog/creating-custom-angularjs-directives-part-6-using-controllers/image_thumb%5B2%5D_thumb.png &quot;image_thumb[2]&quot;)](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Creating-Custom-AngularJS-Directives-Par_D3AB/image_thumb%5B2%5D_2.png)

There are several different ways to handle rendering this type of UI. A typical DOM-centric approach would use the **link** function to handle everything as shown in the following directive. Keep in mind there are many ways to do this and the overall goal is to demonstrate DOM-centric code (as simply as possible):

```
(function() {

  var app = angular.module(&apos;directivesModule&apos;);

  app.directive(&apos;isolateScopeWithoutController&apos;, function () {
      
      var link = function (scope, element, attrs) {

          //Create a copy of the original data that’s passed in
          var items = angular.copy(scope.datasource);

          function init() {
              var html = &apos;&lt;button id=&quot;addItem&quot;&gt;Add Item&lt;/button&gt;&lt;div&gt;&lt;/div&gt;&apos;;
              element.html(html);

              element.on(&apos;click&apos;, function (event) {
                  //event.target is the standard property (srcElement is IE-only)
                  if (event.target.id === &apos;addItem&apos;) {
                      addItem();
                      event.preventDefault();
                  }
              });
          }

          function addItem() {
              //Call external function passed in with &amp;
              scope.add();

              //Add new customer to the local collection
              items.push({
                  name: &apos;New Directive Customer&apos;
              });

              render();
          }

          function render() {
              var html = &apos;&lt;ul&gt;&apos;;
              for (var i = 0, len = items.length; i &lt; len; i++) {
                  html += &apos;&lt;li&gt;&apos; + items[i].name + &apos;&lt;/li&gt;&apos;;
              }
              html += &apos;&lt;/ul&gt;&apos;;

              element.find(&apos;div&apos;).html(html);
          }

          init();
          render();
      };
      
      
      return {
          restrict: &apos;EA&apos;,
          scope: {
              datasource: &apos;=&apos;,
              add: &apos;&amp;&apos;
          },
          link: link
      };
  });

}());
```

  

Although this code gets the job done, it’s more along the lines of a jQuery plugin and takes what I refer to as a “[control-oriented](http://weblogs.asp.net/dwahlin/The-JavaScript-Cheese-is-Moving_3A00_-Data_2D00_Oriented-vs.-Control_2D00_Oriented-Programming)” approach where tag names and/or IDs are prevalent in the code. All of the DOM manipulation is handled manually which is fine and maybe even preferred in some cases (for performance reasons for example), but it’s definitely not the normal way we build Angular apps. The DOM manipulation code is mixed in with the scope which starts to get messy especially as the directive grows in size.

As the button is clicked the **addItem()** function is called which handles calling an isolate scope property (**add**) and invoking the **render()** function which renders a &lt;ul&gt; tag and multiple &lt;li&gt; tags. There’s nothing wrong with this approach per se, but I’m not a fan of having a lot of separate strings embedded in the JavaScript since they can cause a maintenance nightmare over time. While a small directive like this is fairly easy to maintain, the code can get more challenging as the directive has additional features added.

There’s also a more subtle issue at play in this code. When **scope.add()** is called the invoked parent scope function will need to use **$scope.$apply()** to update any properties in the parent scope since the call to **add** is being made from vanilla JavaScript rather than from within the context of AngularJS (something that’s outside the scope of this post, but definitely important to consider). Finally, the directive doesn’t resemble the “child view” concept that was mentioned at the beginning of the post – it’s just a bunch of code. How can a controller help out in this example? Let’s take a look.
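To see why **$scope.$apply()** matters, here’s a framework-free sketch of the dirty-checking idea behind it. The Scope, watch, digest, and apply names below are simplified stand-ins invented for this illustration, not Angular’s actual implementation:

```
// Minimal dirty-checking sketch (names are invented for illustration).
function Scope() {
    this.watchers = [];
}

Scope.prototype.watch = function (getter, listener) {
    this.watchers.push({ getter: getter, listener: listener, last: getter() });
};

// A digest re-checks every watched value and fires listeners on change.
Scope.prototype.digest = function () {
    this.watchers.forEach(function (w) {
        var current = w.getter();
        if (current !== w.last) {
            w.listener(current, w.last);
            w.last = current;
        }
    });
};

// apply = run your code, then trigger a digest so watchers see the change.
Scope.prototype.apply = function (fn) {
    fn();
    this.digest();
};

var scope = new Scope();
scope.count = 0;

var rendered = null;
scope.watch(function () { return scope.count; },
            function (value) { rendered = value; });

scope.count = 1;       // changed from a plain event handler...
console.log(rendered); // null - no digest ran, so the 'view' is stale

scope.apply(function () { scope.count = 2; });
console.log(rendered); // 2 - apply triggered a digest and the watcher fired
```

Built-in directives such as ng-click wrap your handler in an apply step automatically, which is why a directive built with a controller and view doesn’t need to worry about this.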

## Adding a Controller and View into a Directive

  
The directive shown in the previous section gets the job done, but what if you could write it much like you’d write a standard AngularJS view and use a more [data-oriented](http://weblogs.asp.net/dwahlin/The-JavaScript-Cheese-is-Moving_3A00_-Data_2D00_Oriented-vs.-Control_2D00_Oriented-Programming) approach as opposed to the [control-oriented](http://weblogs.asp.net/dwahlin/The-JavaScript-Cheese-is-Moving_3A00_-Data_2D00_Oriented-vs.-Control_2D00_Oriented-Programming) approach that the DOM-centric version takes? By using a controller and view in a directive, the development process feels more along the lines of what you do every day in AngularJS applications.

Here’s an example of converting the directive shown earlier into a cleaner version (in my opinion anyway) that relies on a controller and a simple view:

```
(function() {

  var app = angular.module(&apos;directivesModule&apos;);

  app.directive(&apos;isolateScopeWithController&apos;, function () {
      
    var controller = [&apos;$scope&apos;, function ($scope) {

          function init() {
              $scope.items = angular.copy($scope.datasource);
          }

          init();

          $scope.addItem = function () {
              $scope.add();

              //Add new customer to directive scope
              $scope.items.push({
                  name: &apos;New Directive Controller Item&apos;
              });
          };
      }],
        
      template = &apos;&lt;button ng-click=&quot;addItem()&quot;&gt;Add Item&lt;/button&gt;&lt;ul&gt;&apos; +
                 &apos;&lt;li ng-repeat=&quot;item in items&quot;&gt;{{ ::item.name }}&lt;/li&gt;&lt;/ul&gt;&apos;;
      
      return {
          restrict: &apos;EA&apos;, //Default in 1.3+
          scope: {
              datasource: &apos;=&apos;,
              add: &apos;&amp;&apos;
          },
          controller: controller,
          template: template
      };
  });

}());
```

  

The directive could be used in one of the following ways:

```
Attribute: &lt;div isolate-scope-with-controller datasource=&quot;customers&quot; add=&quot;addCustomer()&quot;&gt;&lt;/div&gt;

Element: &lt;isolate-scope-with-controller datasource=&quot;customers&quot; add=&quot;addCustomer()&quot;&gt;
         &lt;/isolate-scope-with-controller&gt;
```

Looking through the directive code you can see that it’s very similar to the approach you’d take for writing a normal view with a controller. I’d argue that it looks like you’re writing a “child view” as mentioned at the beginning of this post since the code is focused more on the data and less on the “controls” in the view. The view takes advantage of AngularJS directives to handle various control rendering tasks which eliminates all of the DOM code that had to be written before.

The view is defined using the **template** property and the controller is defined using the **controller** property. Keep in mind that the view can be loaded from a file using the **[templateUrl](https://docs.angularjs.org/api/ng/service/$compile)** property or from the **[$templateCache](https://docs.angularjs.org/api/ng/service/$templateCache)** as well – it doesn’t have to be embedded directly in the directive. The **templateUrl** or **$templateCache** options are really useful when a view has a lot of HTML in it that you don’t want embedded in the directive.

As mentioned, the code in the view leverages existing AngularJS directives such as **ng-click** and **ng-repeat** and also uses **{{ … }}** data binding expressions. This eliminates the DOM code shown earlier in the DOM-centric/control-oriented directive. The controller has the **$scope** injected as you’d expect and uses it to define an **items** property which is consumed by **ng-repeat** in the view to generate &lt;li&gt; tags. When the button in the view is clicked, the **addItem()** function on the **$scope** is invoked, which calls the **add** isolate scope property and adds a new item object into the local collection (since angular.copy() is used, items added into the local collection won’t show up in the parent scope). Because **addItem()** is called using **ng-click**, the parent scope call that is made ($scope.add()) won’t need to worry about using **$scope.$apply()** as mentioned in the earlier section.
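The angular.copy() isolation the directive relies on can be demonstrated without Angular at all. In this sketch a JSON round-trip stands in for angular.copy (an assumption made for illustration; angular.copy handles additional cases such as dates):

```
// Plain-JS sketch of the copy semantics the directive relies on.
// JSON round-tripping stands in for angular.copy here.
var parentCustomers = [{ name: 'Customer A' }];

// The directive copies the bound datasource into its own collection...
var directiveItems = JSON.parse(JSON.stringify(parentCustomers));

// ...so pushing into the local copy leaves the parent data untouched.
directiveItems.push({ name: 'New Directive Controller Item' });

console.log(parentCustomers.length); // 1 - parent scope collection unchanged
console.log(directiveItems.length);  // 2 - local collection has the new item
```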

I realize that in situations where a directive is being written with raw performance in mind, the DOM-centric approach shown earlier may be preferred since you’d be purposely taking over control of the HTML that’s generated and avoiding the use of Angular directives. If you ever attend one of my conference sessions or training classes you’ll often hear me say, “Use the right tool for the right job”. I’ve never believed that “one size fits all” and know that each situation and application is unique.

This thought process definitely applies to directives since there are many different ways to write them.  In many situations I’m happy with how AngularJS performs and know about the pitfalls to avoid so I prefer the controller/view type of directive whenever possible. It makes maintenance much easier down the road since you can leverage existing Angular directives in the directive’s view and modify the view using a controller and scope. If, however, I was trying to maximize performance and eliminate the use of directives such as ng-repeat then going the DOM-centric route with the **link** function might be a better choice. Again, choose the right tool for the right job.

## Using controllerAs in a Directive

If you’re a fan of the controllerAs syntax you may be wondering if the same style can be used inside of directives. The answer is “yes”! When you define a Directive Definition Object (DDO) in a directive you can add a **controllerAs** property. Starting with Angular 1.3 you’ll also need to add a **[bindToController](https://docs.angularjs.org/api/ng/service/$compile)** property as well to ensure that properties are bound to the controller rather than to the scope. Here’s an example of the previous directive that has been converted to use the controllerAs syntax:

```
(function() {

  var app = angular.module(&apos;directivesModule&apos;);

  app.directive(&apos;isolateScopeWithControllerAs&apos;, function () {
      
      var controller = function () {
          
              var vm = this;
          
              function init() {
                  vm.items = angular.copy(vm.datasource);
              }
              
              init();
              
              vm.addItem = function () {
                  vm.add();

                  //Add new customer to directive scope
                  vm.items.push({
                      name: &apos;New Directive Controller Item&apos;
                  });
              };
      };    
      
      var template = &apos;&lt;button ng-click=&quot;vm.addItem()&quot;&gt;Add Item&lt;/button&gt;&apos; +
                     &apos;&lt;ul&gt;&lt;li ng-repeat=&quot;item in vm.items&quot;&gt;{{ ::item.name }}&lt;/li&gt;&lt;/ul&gt;&apos;;
      
      return {
          restrict: &apos;EA&apos;, //Default for 1.3+
          scope: {
              datasource: &apos;=&apos;,
              add: &apos;&amp;&apos;
          },
          controller: controller,
          controllerAs: &apos;vm&apos;,
          bindToController: true, //required in 1.3+ with controllerAs
          template: template
      };
  });

}());
```

  

Notice that a controller alias of **vm** (short for “ViewModel”) has been assigned to the **controllerAs** property and that the alias is used in the controller code and in the view. The **bindToController** property is set to **true** to ensure that properties are bound to the controller instead of the scope. While this code is very similar to the initial controller example shown earlier, it allows you to use “dot” syntax in the view (vm.items for example) which is a recommended approach.  
  

## Conclusion

  
Controllers can be used to clean up directives in many scenarios. Although using a controller isn’t always necessary, you’ll find that by leveraging the “child view” concept in directives your code can be kept more maintainable and easier to work with. The next post in the series moves on to discuss additional features that can be used in directives such as $asyncValidators.</content:encoded></item><item><title>Getting Started with ES6 – Transpiling ES6 to ES5 with Traceur and Babel</title><link>https://blog.codewithdan.com/getting-started-with-es6-transpiling-es6-to-es5-with-traceur-and-babel/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-with-es6-transpiling-es6-to-es5-with-traceur-and-babel/</guid><pubDate>Tue, 16 Dec 2014 00:00:00 GMT</pubDate><content:encoded>In the [first post](https://weblogs.asp.net/dwahlin/getting-started-with-es6-%E2%80%93-the-next-version-of-javascript) in this series I introduced key features in ECMAScript 6 (ES6), discussed tools that can be used today to transpile code to ES5 so that it can work in today’s browsers, and listed several resources that will help get you started. Before jumping into the first official ES6 feature (that’s coming in the next post) I wanted to write a step-by-step walkthrough that covers how to get the [Traceur](https://github.com/google/traceur-compiler) and [Babel](https://babeljs.io/) transpilers working with [Gulp](http://gulpjs.com/) (a JavaScript task runner). I’m also going to sneak in a little TypeScript as well since it’s another option. By getting these tools in place you can start writing ES6 code, convert/transpile it to ES5, and then use the generated code in older browsers. Going that route lets you take advantage of the future of JavaScript right now without having to wait around until all of the browsers fully support ES6.

Two options are available in this post. If you want a step-by-step look at getting Gulp setup to work with ES6 transpilers then I’d recommend reading the entire post. If you want to start working with ES6 but aren’t interested in getting the ES6 transpilers setup then skip to the **Configuring the ES6 Samples** section below, where you can download a project that has everything in place and see how to get started using it.

Let’s jump into a step-by-step walk-through of setting up Gulp, Traceur and Babel.

## Installing Node.js and Gulp

  
Transpilers such as [Traceur](https://github.com/google/traceur-compiler) and [Babel](https://babeljs.io/) can be run directly from the command line which makes it fairly trivial to convert ES6 code to ES5. However, after performing a command-line task a few times you’ll begin to wonder if there’s a way to automate the process. The good news is that JavaScript task runner tools such as [Grunt](http://gruntjs.com/) or [Gulp](http://gulpjs.com/) can automate just about any JavaScript task you can think of. You can use them to perform a variety of tasks such as restarting a Node.js server if it dies, “live” reloading a webpage as HTML or CSS code changes, concatenating and minifying JavaScript files, finding unused CSS classes in a file, plus much more. You can also use these tools to automatically convert ES6 code to ES5 as you save a code file.

While both tools get the job done well, I personally prefer Gulp so the steps that follow will show how to use it. I’m going to assume that you haven’t done much with Node.js or Gulp so if you’re already a Node.js or Gulp expert you can gloss over some of the steps that follow. Let’s get started by getting Node.js and Gulp installed.

### Step 1: Install Node.js

  
Gulp requires [Node.js](http://nodejs.org) so the first thing you’ll need to do (if you haven’t done it already) is install Node.js on your machine. Navigate to [http://nodejs.org](http://nodejs.org) in the browser and click the **Install** button to download the Node.js installation file.  
  

![The Node.js download page](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/2272876fb5e4_BAB3/image_2.png)  
  

Once the file downloads, double-click it and follow the instructions to install Node.js on your machine.

  
 

### Step 2: Use npm to Install Gulp

  
When you install Node.js you also get access to another tool called [npm](https://www.npmjs.com/) that can be used to install Node.js modules. Gulp is one of many modules that are available (see [https://www.npmjs.com](https://www.npmjs.com) for a complete list).  Follow these steps to use npm to install Gulp.

  
**Note for Mac Users**: You may need to add “sudo” in front of the install commands below if they fail. If you don’t want to use sudo (for security reasons) check out [this post](http://howtonode.org/introduction-to-npm).

1. Create a folder named **es6Demos** on your desktop (or anywhere else you’d like to create it) with an empty **es6** subfolder inside it:  
      
    ![es6Demos folder structure](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/2272876fb5e4_BAB3/image_10.png)  
      
    
2. Open a command-prompt and navigate to the **es6Demos** folder.  
      
    
3. Type the following at the command-prompt:  
      
    **npm init**
4. Running this command will start a command-line wizard that’s used to generate a file named **package.json**.  
      
    
5. Accept all of the defaults for now (or you can supply values for the author name, description, etc. if you’d like) by pressing Return/Enter for each question asked.  
      
    
6. At the end of the wizard you’ll be asked to type “yes” to complete the process. The **package.json** file that’s generated is used to store all of the modules that your app may use (such as Gulp and others).  
      
    
7. Type the following at the command-prompt to install the Gulp module globally on your machine:  
      
    **npm install gulp -g**
8. Next type the following at the command-prompt to add Gulp to your **package.json** file’s dev dependencies. This will cause the **package.json** file to be updated and add a **node\_modules** folder into the **es6Demos** folder that contains Gulp-related code/files.  
      
    **npm install gulp --save-dev**
9. To try out Gulp, type the following at the command-prompt and press Enter/Return. You should see an error saying that no gulpfile was found.  
      
    **gulp**
      
    

### Step 3: Install Traceur, Babel, and Additional Gulp Modules

  
Now that Node.js and Gulp are available it’s time to load the **Traceur** and **Babel** Gulp modules so that we can use them to transpile ES6 code to ES5. In a “real” app you’d choose one of these rather than using both at the same time, but you’ll configure both here so that you can see the type of ES5 code that they generate.

I’m going to show the entire process for getting Gulp modules in place. At the end of the steps I’ll show how simple it is to get everything going using npm and the package.json file though so that in the future this process is super quick.

1. Run the following commands at the command-prompt (it should still be at the **es6Demos** folder) to install Gulp modules (and some others) that can be used to transpile ES6 to ES5. As mentioned above, I’m showing how to install each individual Gulp module so that you understand the process. It’s a one-time setup process and can be re-used once in place as you’ll see at the end of this section.  
      
    ```
    npm install gulp-babel --save-dev
    npm install gulp-traceur --save-dev
    npm install gulp-typescript --save-dev
    npm install gulp-plumber --save-dev
    npm install gulp-concat --save-dev
    npm install gulp-uglify --save-dev
    ```
    
    Although the gulp-plumber, gulp-concat, gulp-typescript, and gulp-uglify modules are optional, they’re quite useful in a real-life workflow. The gulp-plumber module keeps the Gulp pipeline from breaking when errors occur, while gulp-concat and gulp-uglify can be used to concatenate and minify JavaScript files to get them ready for production. I won’t focus on them in this post but they’re good to have in place and use once you’re ready to move files into production.  
      
    
2. Open the **package.json** file that’s in the **es6Demos** folder in a text editor and notice that a **devDependencies** property has been added along with details about each module that you installed earlier including their version number. The file should look something like the following (note that I removed a few properties that aren’t needed in the code below):  
      
    
    ```
    {
      &quot;name&quot;: &quot;ES6Demos&quot;,
      &quot;version&quot;: &quot;1.0.0&quot;,
      &quot;description&quot;: &quot;&quot;,
      &quot;author&quot;: &quot;&quot;,
      &quot;license&quot;: &quot;ISC&quot;,
      &quot;devDependencies&quot;: {
        &quot;gulp&quot;: &quot;^3.8.10&quot;,
        &quot;gulp-babel&quot;: &quot;^4.0.1&quot;,
        &quot;gulp-concat&quot;: &quot;^2.4.2&quot;,
        &quot;gulp-plumber&quot;: &quot;^0.6.6&quot;,
        &quot;gulp-traceur&quot;: &quot;^0.14.1&quot;,
        &quot;gulp-typescript&quot;: &quot;^2.3.0&quot;,
        &quot;gulp-uglify&quot;: &quot;^1.0.2&quot;
      }
    }
    ```
    
3. Open the **node\_modules** folder and notice the subfolders that are there now.

  
 

### Step 4: Creating a gulpfile

  
Now that all of the necessary modules are in place it’s time to create a Gulp task runner file named **gulpfile.js**. This file is responsible for defining tasks that use the Gulp modules installed earlier such as Traceur and Babel.  
  

1. Create a new file in the **es6Demos** folder named **gulpfile.js**.  
      
    
2. Add the following code into **gulpfile.js** to load the Gulp modules installed earlier and define a few paths:  
      
      
    
    ```
    var gulp = require(&apos;gulp&apos;),
        traceur = require(&apos;gulp-traceur&apos;),
        babel = require(&apos;gulp-babel&apos;),
        plumber = require(&apos;gulp-plumber&apos;),
        es6Path = &apos;es6/*.js&apos;,
        compilePath = &apos;es6/compiled&apos;;
    ```
    
3. Now add the following code under the code in the previous step to create a new **Traceur** task. This adds the plumber module into the streaming process to handle any errors that occur in the piping process more gracefully and then invokes the **Traceur** transpiler. The blockBinding property allows block-level definitions to be used in the ES6 code via the new _let_ keyword (more about that feature in a future post).  
      
      
    
    ```
    gulp.task(&apos;traceur&apos;, function () {
        gulp.src([es6Path])
            .pipe(plumber())
            .pipe(traceur({ blockBinding: true }))
            .pipe(gulp.dest(compilePath + &apos;/traceur&apos;));
    });
    ```
    
4. Now add code to create a **Babel** task:  
      
      
    
    ```
    gulp.task(&apos;babel&apos;, function () {
        gulp.src([es6Path])
            .pipe(plumber())
            .pipe(babel())
            .pipe(gulp.dest(compilePath + &apos;/babel&apos;));
    });
    ```
    
5. The previous tasks will cause ES6 to be transpiled to ES5 (using two different techniques) but they won’t run any time an ES6 file is saved. To automate the process add the following watch task:  
      
      
    
    ```
    gulp.task(&apos;watch&apos;, function() {
    
        gulp.watch([es6Path], [&apos;traceur&apos;, &apos;babel&apos;]);
    
    });
    
    ```
    
6. Finally, create a default task that runs the **traceur**, **babel**, and **watch** tasks when you first start Gulp:  
      
      
    
    ```
    gulp.task(&apos;default&apos;, [&apos;traceur&apos;, &apos;babel&apos;, &apos;watch&apos;]);
    ```
    
7. Save **gulpfile.js** and run the following command again at the command-prompt:  
      
    **gulp**
8. The Gulp tasks should now run and the console should display output about the tasks.  
    
9. Leave the console up and running and continue to the next section.
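For reference, here’s the complete **gulpfile.js** assembled from the steps above:

```
var gulp = require('gulp'),
    traceur = require('gulp-traceur'),
    babel = require('gulp-babel'),
    plumber = require('gulp-plumber'),
    es6Path = 'es6/*.js',
    compilePath = 'es6/compiled';

gulp.task('traceur', function () {
    gulp.src([es6Path])
        .pipe(plumber())
        .pipe(traceur({ blockBinding: true }))
        .pipe(gulp.dest(compilePath + '/traceur'));
});

gulp.task('babel', function () {
    gulp.src([es6Path])
        .pipe(plumber())
        .pipe(babel())
        .pipe(gulp.dest(compilePath + '/babel'));
});

gulp.task('watch', function() {
    gulp.watch([es6Path], ['traceur', 'babel']);
});

gulp.task('default', ['traceur', 'babel', 'watch']);
```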

  
 

## Transpiling ES6 to ES5

  
Now that you have Gulp up and running and the Traceur and Babel tasks in place it’s time to transpile ES6 to ES5.

1. Add a new file named **car.js** into the **es6** folder.  
      
    
2. Add the following code into **car.js**. I’ll be discussing this code in a future post but in a nutshell, ES6 now supports encapsulating code by using classes.  
      
      
    
    ```
    class Car {
        
        constructor(engine) {
            this.engine = engine;
        }
    
    }
    ```
    
3. Save the file, then open the **es6/compiled/traceur** folder and look at the generated **car.js** file. It will have the following ES5 code in it:  
      
      
    
    ```
    &quot;use strict&quot;;
    var Car = function Car(engine) {
      this.engine = engine;
    };
    ($traceurRuntime.createClass)(Car, {}, {});
    ```
    
      
      
    
4. The **car.js** file in **es6/compiled/babel** has the following ES5 code:  
      
      
    
    ```
    &quot;use strict&quot;;
    
    var Car = function Car(engine) {
      this.engine = engine;
    };
    ```
    
5. You’ve now successfully transpiled ES6 code to ES5 using both Traceur and Babel.
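Class methods get transpiled as well. Here’s a hand-written comparison (my own illustrative extension of the Car example, not output copied from either tool) showing how a method declared in a class body ends up on the prototype in ES5-style code:

```
// ES6 version of Car with a start() method added (illustrative only):
//
//   class Car {
//       constructor(engine) { this.engine = engine; }
//       start() { return this.engine + ' started'; }
//   }
//
// Hand-written ES5 equivalent, similar in spirit to the transpiled output:
'use strict';

var Car = function Car(engine) {
    this.engine = engine;
};

// Methods declared in the class body live on the prototype in ES5
Car.prototype.start = function () {
    return this.engine + ' started';
};

var car = new Car('V8');
console.log(car.start()); // 'V8 started'
```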

  
 

## Configuring the ES6 Samples

  
If you decided to skip all of the steps above you can visit [http://github.com/danwahlin/es6samples](http://github.com/danwahlin/es6samples) and click the **Download ZIP** button to get all of the code (or clone the repository if you’re familiar with Git). Once you have the code extracted follow the steps below to get Gulp and the transpilers setup.

Note that if you’re on a Mac you may need to prefix each of the npm commands with “sudo” if any of the install commands trigger an error.

1. Install **Node.js** if it’s not already on your system.  
      
    
2. Open a command-prompt and navigate to the project’s root folder that has the **package.json** file in it.  
      
    
3. If you haven’t already installed **Gulp** run the following command:  
      
    **npm install gulp -g**
4. Run **npm install** to get the necessary Node.js/Gulp modules installed.  
      
    
5. Run the following command to transpile the JavaScript files from ES6 to ES5 and start the file watcher:  
      
    **gulp**
6. Open the **es6/compiled** folder and look at the files that are generated in the two subfolders (traceur and babel).

  
 

## Conclusion

  
Transpilers such as Traceur and Babel provide a way to convert ES6 code to ES5 quickly and easily. In this post you’ve seen a step-by-step walk-through of setting up the Gulp task runner to automate the transpilation process and simplify the overall process of converting ES6 code to ES5. Now that the setup work for transpiling files is in place, it’s time to jump into some of the different ES6 features that are available. Stay tuned for future posts.

  
  
**Onsite Developer Training:** If your company is interested in onsite training on JavaScript, ES6, AngularJS, Node.js, C# or other technologies, please email [training@wahlinconsulting.com](mailto:training@wahlinconsulting.com) for details about the classes that we offer.</content:encoded></item><item><title>Building Applications with AngularJS, Azure Active Directory, and Office 365/SharePoint</title><link>https://blog.codewithdan.com/building-applications-with-angularjs-azure-active-directory-and-office-365sharepoint/</link><guid isPermaLink="true">https://blog.codewithdan.com/building-applications-with-angularjs-azure-active-directory-and-office-365sharepoint/</guid><pubDate>Sun, 07 Dec 2014 00:00:00 GMT</pubDate><content:encoded>One of my favorite features of Single Page Applications (SPAs) is the ability to integrate data from nearly any backend technology and have it display on a variety of devices (desktop browser, mobile, tablet, and more). Whether you’re calling a service like [Firebase](https://www.firebase.com/) or [Azure Mobile Services](http://azure.microsoft.com/en-us/documentation/services/mobile-services/) or hitting a custom REST API, the data that’s returned can be integrated into your SPA regardless of the language, technology, or operating system being used on the server.

Some of the backend technologies I’ve been spending a lot of time with from a business perspective lately include [Azure](http://azure.microsoft.com/) and the [Office 365/SharePoint API](http://msdn.microsoft.com/en-us/office/office365/api/api-catalog). Azure provides a wealth of cloud services such as websites, virtual machines, a cloud-based Active Directory, mobile services, and much more, while Office 365/SharePoint provides access to mail, contacts, calendars, files, SharePoint services, and other data services.

We’ve been building AngularJS applications that take advantage of some of the new features provided by these cloud services and writing a series of articles about them for the [itunity.com](http://www.itunity.com/) site. The first article is titled [Integrating AngularJS with Azure Active Directory and Office 365/SharePoint, Part 1](http://www.itunity.com/article/integrating-angularjs-aad-office-365sharepoint-part-1-622) and was written by me and my good friend [Spike Xavier](http://transmissionit.com/). I’m not able to post the entire article here, but here’s a snippet from it. The full article can be found on the [itunity.com](http://www.itunity.com/article/integrating-angularjs-aad-office-365sharepoint-part-1-622) site.  

## Integrating AngularJS with Azure Active Directory and Office 365/SharePoint, Part 1  

Data is everywhere in today’s business environment and employees expect to be able to access it regardless of where it lives. They don’t care if it’s stored in a database, retrieved from a service, found in a SharePoint list or library, or stored somewhere else up in the cloud. Regardless of where the data lives, they expect to be able to get to it on any operating system, with any browser, and using any device.

With the popularity of SharePoint across many organizations, an enormous amount of business data is added into lists and libraries every day. If employees have access to SharePoint directly on their chosen device, then a variety of techniques can be used to expose the data to them, ranging from SharePoint pages and Web Parts to App Parts and more. However, if they won’t always be using SharePoint directly to access the data, how can you get it to them? Fortunately, this isn’t a new problem; it has been solvable for many years using SharePoint Web Services or RESTful services. How does that process change, though, if you’re using Microsoft Azure, Office 365 and SharePoint?

In this article series, we’ll walk you through the process of creating an external application that can interact with Microsoft Azure Active Directory, Office 365 and SharePoint. The overall goal of the series is to show developers that may be new to SharePoint how they can leverage existing HTML and JavaScript development skills to integrate with Azure and pull Office 365/SharePoint data into custom Web applications.

This first article will discuss whether or not a SharePoint-centric application or external application should be considered. It’ll also introduce an Expense Manager application and explain what it offers. Future articles in this series will discuss the technologies used by the Expense Manager application and dive into how AngularJS can be used to build a single-page application (SPA) that can be used by employees on any device throughout an organization.

Let’s get started by discussing whether an application should be embedded directly into SharePoint or if it should be external.  

### Stand-alone or SharePoint hosted? That is the question!  

While it’s true that SharePoint uses SQL Server under the covers, SharePoint itself is NOT a relational database. It’s ultimately up to the developer to understand how SharePoint works in order to maximize the use of the APIs and interact with the various objects and data.

Why is this important? It’s important because SharePoint is a robust platform that can be used to build a variety of applications that run inside or outside of SharePoint. With support for RESTful services, developers can leverage their existing Web development skills to enhance, expand and extend what SharePoint can do. A fundamental understanding of the building blocks of SharePoint will save a lot of time and frustration and allow developers to do what they do best while allowing SharePoint to do what it does best.

Read the full article at [http://www.itunity.com/article/integrating-angularjs-aad-office-365sharepoint-part-1-622](http://www.itunity.com/article/integrating-angularjs-aad-office-365sharepoint-part-1-622 &quot;http://www.itunity.com/article/integrating-angularjs-aad-office-365sharepoint-part-1-622&quot;).</content:encoded></item><item><title>Registering a Custom AngularJS Application with Azure Active Directory</title><link>https://blog.codewithdan.com/registering-a-custom-angularjs-application-with-azure-active-directory/</link><guid isPermaLink="true">https://blog.codewithdan.com/registering-a-custom-angularjs-application-with-azure-active-directory/</guid><pubDate>Sun, 07 Dec 2014 00:00:00 GMT</pubDate><content:encoded>If you’re working with Azure and need to add authentication and identity management into an application look no further than [Azure Active Directory](http://azure.microsoft.com/en-us/services/active-directory/) (AAD). AAD provides a robust set of services for single sign-on, authentication, multi-factor authentication, and more. Rather than setting up a custom authentication provider for an app, you can leverage existing functionality provided by AAD.

To associate a custom application with AAD you first need to register it. If you’ve ever registered a custom app with Facebook, Twitter, or another service you’ll find the AAD app registration process to be quite similar. When you register your application, AAD will generate a client ID and password that can be configured in your app and act as  a “hook” between the two. Once that’s done the application can direct all authentication actions to AAD and upon successful authentication, AAD can redirect the user back to a page in your custom app. This causes identity and access tokens to be provided to your application by AAD. An identity token has information about a given authenticated user (in addition to other details) while an access token provides temporary access to a secured resource (such as Office 365 services) that is also registered with AAD.
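Conceptually, the registration values plug straight into the sign-in redirect. Here's a rough sketch of the kind of authorize request an app sends to AAD (the endpoint shape and parameter names follow the OAuth 2.0/OpenID Connect pattern AAD uses; the tenant, client ID, and redirect URI values below are hypothetical placeholders for what your own registration provides):

```javascript
// Hypothetical values taken from an AAD app registration
const tenant = "contoso.onmicrosoft.com";
const clientId = "00000000-0000-0000-0000-000000000000";
const redirectUri = "https://localhost:44300/";

// OAuth 2.0 / OpenID Connect style authorize request; AAD redirects
// back to redirectUri with an identity token after a successful sign-in
const authorizeUrl =
    "https://login.windows.net/" + tenant + "/oauth2/authorize" +
    "?response_type=id_token" +
    "&client_id=" + encodeURIComponent(clientId) +
    "&redirect_uri=" + encodeURIComponent(redirectUri);

console.log(authorizeUrl);
```

In practice a library handles building this URL and parsing the returned tokens for you; the sketch just shows where the client ID and redirect URI from the registration end up.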

In a [previous post](https://weblogs.asp.net/dwahlin/building-applications-with-angularjs-azure-active-directory-and-office-365-sharepoint) I discussed a series of articles that I’m writing about AngularJS, Azure Active Directory, and Office 365/SharePoint for [itunity.com](http://www.itunity.com/). The first article introduced the application and provided an overview of the technologies used while the second article walks through how to register the custom application with AAD. Although I can’t post the entire article, here’s a snippet from it with a link to the complete article below.

## Integrating AngularJS with Azure Active Directory and Office 365/SharePoint, Part 2  

[Part 1](http://www.itunity.com/article/integrating-angularjs-azure-active-directory-office-365sharepoint-part-1-622) of this article series introduced a stand-alone Expense Manager application built using AngularJS, Azure Active Directory, and the Office 365/SharePoint APIs. The overall scenario discussed throughout the series revolves around integrating data from the cloud into an application that can stand on its own and run gracefully in desktop browsers and mobile/tablet browsers. If you haven&apos;t read through [Part 1](http://www.itunity.com/article/integrating-angularjs-azure-active-directory-office-365sharepoint-part-1-622), I encourage you to read it first before going through this article since it provides additional context about the application and the technologies used to build it. If you&apos;re new to Microsoft Azure or Office 365 cloud services, I also recommend that you learn more about them at [http://azure.microsoft.com](http://azure.microsoft.com/) and [http://products.office.com/business/enterprise-productivity-tools](http://products.office.com/business/enterprise-productivity-tools).

One of the essential requirements that enterprise applications (and many non-enterprise applications) have is authentication and identity management. Although you can certainly use on-premise deployments of Active Directory (AD) or another custom security provider, Azure Active Directory (AAD) integrates directly with Office 365/SharePoint APIs (in addition to several others) and lets you harness the power of the cloud. In this article, you&apos;ll see how a custom application such as the Expense Manager application discussed in Part 1 can be registered with Azure Active Directory so that users can be authenticated. In the next article in the series, we&apos;ll explore the AAD code that&apos;s required and see how it plugs into an ASP.NET MVC application.

So what is Azure Active Directory and how do you get started using it? In a nutshell, it provides &quot;Identity and Access Management for the Cloud&quot; (see [http://azure.microsoft.com/services/active-directory](http://azure.microsoft.com/services/active-directory/) for additional details). By using the Azure management website, you can set up and integrate AAD authentication and identity management into custom applications and also use it to secure Office 365/SharePoint deployments. AAD has many additional features, but the Expense Manager application discussed in this series uses it strictly for authentication purposes so that Office 365/SharePoint lists can be accessed and modified.

You can manage your Azure account and AAD functionality by going to [https://manage.windowsazure.com](https://manage.windowsazure.com/). A new Azure management portal ([https://portal.azure.com](https://portal.azure.com/)) is also available but it&apos;s currently in beta (as I’m writing this article) and doesn&apos;t offer AAD features yet. It&apos;s important to point out that going through the Azure management site isn&apos;t the only solution for registering a custom application with AAD so that users can authenticate. When working with Office 365/SharePoint APIs, you can simplify the overall process by using the Microsoft Office 365 API Tools, which is another topic that will be covered in this article.

Let&apos;s get started by taking a look at how to register a custom application with AAD using the Azure management site. Once you see how to manually register an application with AAD, you&apos;ll then learn about the Microsoft Office 365 API Tools and see how they can be installed and used to register an application from within Visual Studio.  

### Registering an application with Azure Active Directory  

In order to add AAD authentication functionality into the Expense Manager application (or any custom application), it has to be registered with AAD. The standard way to register an application is to log in to your Azure account ([https://manage.windowsazure.com](https://manage.windowsazure.com/)) and click the _Active Directory_ item on the left.  

![Figure 1](http://www.itunity.com/content/content/720/wahlin_aad-o365tools_1.png)

Figure 1: Log in to Azure and click Active Directory to begin the process of registering an application with AAD  

Once you click on Active Directory in the Azure management site, the next screen lets you select the directory that the application should be associated with. By default you&apos;ll see a _Default Directory_ entry (Figure 2) unless you&apos;ve created a custom directory.  

![Figure 2](http://www.itunity.com/content/content/720/wahlin_aad-o365tools_2.png)

Figure 2: Select the directory that your application should be associated with  

Read the full article at [http://www.itunity.com/article/integrating-angularjs-o365-aad-registering-custom-app-720](http://www.itunity.com/article/integrating-angularjs-o365-aad-registering-custom-app-720 &quot;http://www.itunity.com/article/integrating-angularjs-o365-aad-registering-custom-app-720&quot;).</content:encoded></item><item><title>Getting Started with ES6 – The Next Version of JavaScript</title><link>https://blog.codewithdan.com/getting-started-with-es6-the-next-version-of-javascript/</link><guid isPermaLink="true">https://blog.codewithdan.com/getting-started-with-es6-the-next-version-of-javascript/</guid><pubDate>Sat, 06 Dec 2014 00:00:00 GMT</pubDate><content:encoded>

JavaScript has come a long way in its 20 years of existence. It’s grown from a language used to define a few variables and functions to one that can be used to build robust applications on the client-side and server-side. Although its popularity continues to grow in large part due to its dynamic nature and ability to run anywhere, JavaScript as a language is still missing many key features that could help increase developer productivity and provide a more maintainable code base. Fortunately, [ECMAScript 6](http://wiki.ecmascript.org/doku.php?id=harmony:specification_drafts) (ES6) adds many new features that will take the language to the next level.

This is the first post in a series I’ll be writing that will walk through key features in ES6 and explain how they can be used. I’ll also introduce tools and other languages along the way that can be used to work with ES6 in different browsers as well as on the server-side with frameworks like [Node.js](http://nodejs.org/). The goal of this first post is to discuss the viability of using ES6 today and point out resources that can help you get started using it. Let’s kick things off by talking about a few of the key features in ES6.

## Key Features in ES6

So what are some of the key features in ES6? Here’s a list of some of my favorites:

- **Arrow functions** – A short-hand version of an anonymous function.
- **Block-level scope** – ES6 now supports scoping variables to blocks (if, for, while, etc.) using the _let_ keyword.
- **Classes** – ES6 classes provide a way to encapsulate and extend code.
- **Constants** – You can now define constants in ES6 code using the _const_ keyword.
- **Default parameters** – Ever wished that a function parameter could be assigned a default value? You can do that now in ES6.
- **Destructuring** – A succinct and flexible way to assign values from arrays or objects into variables.
- **Generators** – Specialized functions that create iterators using _function\*_ and the _yield_ keyword.
- **Map** – Dictionary type object that can be used to store key/value pairs.
- **Modules** – Provides a modular way of organizing and loading code.
- **Promises** – Used with async operations.
- **Rest parameters** – Replaces the need for using _arguments_ to access functions arguments. Allows you to get to an array representing “the rest of the parameters”.
- **Set** – A collection object that can be used to store a list of data values.
- **Template Strings** – Clean way to build up string values.

Looking through the features a few things may jump out at you. First off, there’s support for classes and class extension. If you’ve been working with a language such as Java, C#, C++, Python or the many others that support classes this will probably be something you welcome with open arms. By having support for classes we can eliminate some of the patterns we’ve used in the past (Revealing Module Pattern, etc.) in many cases since encapsulation of code will now be natively supported.

Several other key features listed include block level scope, different types of collection objects called Map and Set, built-in support for Promises (deferred/async objects), a concise function syntax referred to as arrow functions, default parameter value assignment, plus much more. I’ll be discussing these features in greater detail in future posts.
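To make a few of these concrete, here's a small runnable sketch that combines several of the features listed above (arrow functions, default parameters, template strings, `let`, destructuring, classes, Map, and Set):

```javascript
// Arrow function with a default parameter and a template string
const greet = (name = 'World') => `Hello, ${name}!`;

// Block-level scope: each loop iteration gets its own binding of i
let squares = [];
for (let i = 1; i <= 3; i++) {
    squares.push(i * i);
}

// Destructuring an object literal into variables
const { make, engine } = { make: 'Ford', engine: 'V8' };

// Class with a method (native encapsulation of code)
class Car {
    constructor(engine) { this.engine = engine; }
    describe() { return `Car with a ${this.engine} engine`; }
}

// Map and Set collection objects
const prices = new Map([['apple', 1], ['pear', 2]]);
const unique = new Set([1, 1, 2, 3]);

console.log(greet());                      // Hello, World!
console.log(squares);                      // [ 1, 4, 9 ]
console.log(make, engine);                 // Ford V8
console.log(new Car('V8').describe());     // Car with a V8 engine
console.log(unique.size);                  // 3
```

This runs as-is in any environment with full ES6 support, or in older browsers after transpiling with the tools discussed later in this post.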

## Should I Use ES6 Now or Wait?

One of the most frequent questions I get about ES6 in my training and consulting work is, “Should I use ES6 now or wait until it has better support?”. There isn’t a single “correct” answer because it depends upon a range of factors such as:

- Where will your application be deployed (client-side or server-side)?
- What browsers have to be supported?
- What does the development environment look like?
- What tools do developers have access to (in an enterprise environment, for example)?
- Are you updating an existing app or starting a new one?

  
The good news is that tools exist that enable you to use ES6 today in a variety of situations. However, you’ll need to factor in your unique environment and application requirements before deciding if you start using ES6 now or wait until later. I’m a big fan of the quote, “Use the right tool for the right job”.

To start to answer the question of whether or not you should use ES6 now or if you should wait, you need to look at where you’ll be using it. Will it be on the server with a framework like Node.js? If that’s the case then you can focus on what the V8 JavaScript engine supports. Will the code be running directly in a browser or embedded in a mobile app container like Cordova? In that case using ES6 may be more of a challenge and has to be thought through more carefully if you plan to use native browser support.

If you’re only updating a portion of an existing ES5/ES3 application then ES6 may not be needed given that most of the code base is already ES5/ES3. You’ll have to make that call. On the flip side, if you’re starting a project from scratch (a “greenfield” project) then I think ES6 should be seriously considered since you’ll be setting yourself up well for the future.

The good news is that native ES6 support in browsers such as Chrome, Firefox, and Opera is getting better and better every week. Although Internet Explorer is playing a bit of catch up, they’re planning ES6 support as well (see [https://status.modern.ie](https://status.modern.ie) for more details). While modern browsers support various ES6 features, ES6 as a whole isn’t quite ready for “prime time” if you need all of the key language features available at once. Here’s a snapshot in time from [http://kangax.github.io/compat-table/es6](http://kangax.github.io/compat-table/es6) that shows the status of some of the ES6 features across browsers. The information in the image will certainly change so check the website for a more up-to-date view.  

![Snapshot of the ES6 compatibility table](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/57c59b2be72b_127DE/image_2.png)

  
You’ll note that Internet Explorer 6-9 aren’t listed in the matrix (by default anyway) since they won’t be getting ES6 features - ever. If you still have to support one or more of those browsers your first thought is probably, “I can’t use ES6!”. Although that appears to be the case initially, it’s not true. I’ll introduce some tools that will let you convert ES6 code back to ES5 so that it works in older browsers. Of course there are always limitations that come up depending on application requirements, but at least you’ll know the alternatives out there which should allow you to make a more educated decision about moving to ES6 or sticking with ES5/ES3 for the time being.

If you’re one of the fortunate developers that can target a modern browser that already has some ES6 support in it, you’ll still find that some of the features aren’t supported. The story is improving every week but if you can’t count on specific features being there then it’s hard to justify moving to ES6 today.

As for me personally, I’m moving to ES6 now and even building some applications today that rely on ES6. Does that mean I’m ignoring all of the older browsers out there? Nope – not at all. There are techniques that can be leveraged to use ES6 today while still generating ES5 code that runs fine in older browsers. Let’s take a look at some of the tools that are available.  
  

## Transpilers – Converting Code from One Syntax to Another

Earlier I mentioned that it’s possible to use ES6 today while still targeting older browsers. This is made possible through special converters referred to as “transpilers” (sometimes called “transcoders”) that can convert ES6 code to ES5. By integrating a transpiler into your development workflow you can write ES6 code that automatically gets converted to ES5 code using JavaScript task runners like Grunt or Gulp. Two of the most popular transpilers are:


- [Traceur](https://github.com/google/traceur-compiler) – Open source transpiler started by Google that maps ES6 to ES5.
- [Babel](https://babeljs.io/) – Open source transpiler that maps ES6 directly to ES5. It outputs the cleanest code in my opinion.
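As a tiny illustration of what these transpilers do, here's an ES6 arrow function next to the kind of ES5 function a transpiler generates for it (the exact output varies by tool and version, but both forms behave the same):

```javascript
// ES6 input
const add = (a, b) => a + b;

// Roughly the ES5 output a transpiler emits for the line above
"use strict";
var addES5 = function (a, b) {
    return a + b;
};

console.log(add(2, 3));     // 5
console.log(addES5(2, 3));  // 5
```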

In addition to transpilers, you can also use languages such as [TypeScript](http://www.typescriptlang.org/) to write ES6 code that gets converted to ES5 (or even ES3). TypeScript is a superset of JavaScript that lets you write “typed” ES6 code (define numbers, strings, booleans, etc.) while still allowing it to work in any browser out there by “compiling” it to ES5/ES3. I’ll be showing TypeScript code as well in this series when it applies to a specific ES6 feature.

In the next post in this series I’ll discuss how to get started using transpilers since I think they’re essential to understand and integrate into your workflow if you want to start using ES6 today.

## Browser-based ES6 Playgrounds

  
In addition to the transpilers mentioned above, there are also sites and browser extensions available that let you play around with ES6 directly in the browser:

- [ES6 Repl](https://chrome.google.com/webstore/detail/es6-repl/alploljligeomonipppgaahpkenfnfkn?utm_source=chrome-ntp-icon) – Chrome extension that allows you to work with ES6 code directly in the Chrome developer tools.
- [ES6 Fiddle](http://www.es6fiddle.com/) – Website that allows you to type in ES6 code and have it converted to ES5.
- [Traceur Transcoding Demo](https://google.github.io/traceur-compiler/demo/repl.html#) – ES6 playground based on Traceur.
- [Babel REPL](https://babeljs.io/repl/) – ES6 playground based on Babel.
- [TypeScript Playground](http://www.typescriptlang.org/Playground) – TypeScript playground that converts code to ES5.

## ES6 Resources

In the posts that follow I’ll be discussing ES6 features and showing how they can be used. In the meantime I think it’s important to know about the different resources that are out there. Here’s a list of some sites that I refer to often:

- [ES6 Language Specification](http://wiki.ecmascript.org/doku.php?id=harmony:specification_drafts) – The spec that defines all of the planned features in ES6. An unofficial HTML version can be [found here](https://people.mozilla.org/~jorendorff/es6-draft.html).
- [ES6 Features in One Page](http://espadrine.github.io/New-In-A-Spec/es6/) – Short synopsis of key ES6 features.
- [ES6 Features](https://github.com/lukehoban/es6features) – Quick look at ES6 features by Luke Hoban.
- [ES.next Showcase](https://github.com/sindresorhus/esnext-showcase) – List of projects that are using or planning to use ES6.
- [ECMAScript 6 Support in Mozilla](https://developer.mozilla.org/en-US/docs/Web/JavaScript/New_in_JavaScript/ECMAScript_6_support_in_Mozilla) – Details about ES6 support in Firefox.
- [ES6 Browser Support Matrix](http://kangax.github.io/compat-table/es6) – Matrix showing ES6 support across browsers and more.
- [Understanding ECMAScript 6](https://leanpub.com/understandinges6/read) – Online book by Nicholas C. Zakas.
- [ES6 Samples](https://github.com/DanWahlin/ES6Samples) – A repository of samples I’ve put together that will continue to be enhanced and extended over time.

## Conclusion

  
Although ES6 still has a way to go as far as browser support goes, I’m excited about the different features it offers and the future of JavaScript development. Whether or not you plan on using it right away, it’s important to understand what it offers so that you’re prepared and ready for the (near) future. While this post discussed some of the factors to consider before making the jump to ES6 and covered a few features, tools, and resources, there’s a lot more to discuss.

Stay tuned for the next post covering ES6 tools and how they can be used to write ES6 code today while still supporting a variety of browsers.</content:encoded></item><item><title>My Thoughts on AngularJS 1.3 and 2.0</title><link>https://blog.codewithdan.com/my-thoughts-on-angularjs-1-3-and-2-0/</link><guid isPermaLink="true">https://blog.codewithdan.com/my-thoughts-on-angularjs-1-3-and-2-0/</guid><pubDate>Fri, 31 Oct 2014 00:00:00 GMT</pubDate><content:encoded>

  
I&apos;ve received a ton of questions on [Twitter](http://asp.us7.list-manage.com/track/click?u=65e00aa00c80d98f762ebeb6e&amp;id=4086649803&amp;e=8b4b262c4b) and through email about the AngularJS 2.0 announcement. Questions such as &quot;What&apos;s going on with AngularJS?&quot; and &quot;Should I start a new AngularJS 1.3 project when AngularJS 2.0 looks quite different?&quot; Many people are excited about the modern approach the Angular team is taking with 2.0, a few seem to be predicting doom and gloom, and a few more are worried about what they should do. As a result of all the questions I decided to put together a quick post since 140 characters on Twitter isn’t really enough.

## Stay on 1.3 or Move to Another Framework?

While every situation is different and maintenance certainly has to be evaluated carefully with any software project, I personally like what I&apos;ve seen so far (2.0 is still very early though and may certainly change) and plan to build SPAs in the near term using Angular 1.3. Why? Because it just came out, it gets the job done very well, I don’t want to jump to another framework at this point (who&apos;s to say they won&apos;t be making the move to ES6 as well in the near future to stay relevant?), I have confidence in the AngularJS team, and Angular 1.3 makes me more productive. I&apos;ll definitely be keeping a close eye on Angular 2.0 though over the next year or so as it develops.

## Jumping to ES6

Yes, AngularJS 2.0 appears to be a big jump from 1.3 - a HUGE jump some might say! But, I think it&apos;s necessary to be ready for a future that includes technologies like ECMAScript 6, Web Components, and more. Some of the features currently in 1.3 that will go away in 2.0 disappear &quot;naturally&quot; as a result of ES6 functionality such as modules and classes. That&apos;s a good thing in my opinion. New features announced for 2.0 provide a cleaner, more modern way of building Web applications such as moving directives to Web Components (another good thing IMO but will likely cause some app migration work). I&apos;d recommend [reading the post](http://asp.us7.list-manage.com/track/click?u=65e00aa00c80d98f762ebeb6e&amp;id=7aaa402b46&amp;e=8b4b262c4b) from the AngularJS team and taking time to watch some of the videos from [ng-europe](http://asp.us7.list-manage.com/track/click?u=65e00aa00c80d98f762ebeb6e&amp;id=5f1dc3d769&amp;e=8b4b262c4b) (start with [this one](https://www.youtube.com/watch?v=lGdnh8QSPPk) and then watch [this one](https://www.youtube.com/watch?v=gNmWybAyBHI)). I&apos;m excited about what they&apos;re planning even if it means extra work may possibly be required down the road to migrate from 1.3 to 2.0 (we don&apos;t know how much yet since 2.0 is in the early stages). Syntax changes that ultimately improve the way I write future applications don’t bother me although I acknowledge it can cause extra work and has to be planned for appropriately.

## Things to Consider

Every company will have to consider if they stick with AngularJS 1.3 for a while or if they jump ship and move to something else. It&apos;s not like an app has to move to 2.0 once it finally comes out. But, if part or all of the app has to be ported to a new version at some point then that&apos;s something to carefully think over. I&apos;ve been in the position of maintaining a large number of apps and realize that the decision to stay on 1.3 or go in a different direction is a hard one to make. I can sympathize with companies and developers who are in that position.  
  
I don&apos;t know if the migration process will be easy or hard - time will tell. I don&apos;t currently like any of the other options any better than Angular 1.3 though (that&apos;s a very personal preference ...many other viable and capable solutions certainly exist) which is why I&apos;m OK with using it over the next year. When 2.0 is finally released I&apos;ll have to evaluate if I (and my company’s clients) stay on 1.3 or take the time to migrate the app. I won&apos;t pretend to know what&apos;s best for your company as every company has unique requirements and it’s something you have to talk over with your team. There’s no easy answer there and I totally get that. It really boils down to whether or not you&apos;re OK with using AngularJS 1.3 for a while, if you want to learn an entirely different framework (which may possibly shift due to ES6 and other technologies being released), or if you want to write something custom. I&apos;m fine with what Angular 1.3 offers and like the fact that at least I know what to expect (or some of what to expect) when 2.0 is released down the road.  
  
In hindsight (everyone loves to play Monday morning quarterback) I wish a more jQuery-ish approach would&apos;ve been announced with a 1.x and 2.x branch. That would put to rest many of the version and migration fears that people have. Who knows - maybe the AngularJS team will end up doing something like that given that over 1600 apps inside of Google rely on Angular.

## Change is Guaranteed

My good friend [Shawn Wildermuth](http://asp.us7.list-manage2.com/track/click?u=65e00aa00c80d98f762ebeb6e&amp;id=423aabde3d&amp;e=8b4b262c4b) sums all of this up nicely in his latest post titled, [It is Too Soon to Panic on AngularJS 2.0](http://asp.us7.list-manage1.com/track/click?u=65e00aa00c80d98f762ebeb6e&amp;id=2b7f0732d0&amp;e=8b4b262c4b). The last paragraph in the post states the following (he has a lot of other good points so please read the whole post):  
  
&quot;I don’t know what you should do. I don’t. I won’t pretend to. Keep an open mind and keep your ears open. See where it’s going and understand the current and future risks in pinning yourself to the library. You’ll have to hook your wagon to some horse (AngularJS means to Google, ReactJS to Facebook, etc.) But ultimately as I’ve been talking about for years, be prepared for change and look forward to learning what is coming around the corner.&quot;  
  
If you work in the Web world you have to expect, be prepared for, and plan on change. Some of the other frameworks out there will likely be changing as well (maybe big changes, maybe small changes) due to the new technologies coming out. If they don’t change they’ll be considered “old” in the near future as ES6 makes its way onto the scene. The challenge is deciding on how you’ll deal with change and the approach to take moving forward. As Shawn points out, no one (including myself) has all the answers. I always go with my gut feeling whenever possible.  It’s quite possible that in a year or so I may do a pivot to another technology. I’ve had to do that many times over my career. At this point I’m happy with what 1.3 has to offer though.

Some will agree with what I say and some will disagree (quite loudly in some cases). That’s part of the fun of being a Web developer.</content:encoded></item><item><title>Cancelling Route Navigation in AngularJS</title><link>https://blog.codewithdan.com/cancelling-route-navigation-in-angularjs/</link><guid isPermaLink="true">https://blog.codewithdan.com/cancelling-route-navigation-in-angularjs/</guid><pubDate>Sun, 19 Oct 2014 00:00:00 GMT</pubDate><content:encoded>![](/images/blog/cancelling-route-navigation-in-angularjs/AngularJS_thumb_1008B166.webp)


**This post has been updated to cover AngularJS 1.3+.**

Routing provides a nice way to associate views with controllers in AngularJS using a minimal amount of code. While a user is normally able to navigate directly to a specific route, there may be times when a user triggers a route change before they’ve finalized an important action such as saving data.  In this type of situation you may want to cancel the route navigation and ask the user if they’d like to finish what they were doing so that their data isn’t lost. In another situation the user may try to navigate to a view that requires some type of login or other special handling. If the user hasn’t logged in yet the app can automatically redirect them to a login screen (keep in mind that for security the server always has to double-check things since JavaScript can easily be changed).

When route navigation occurs in an AngularJS application a few key events are raised. One is named **$locationChangeStart** and the other is **$routeChangeStart**. Starting with AngularJS 1.3+ this is the order that events fire (see [this commit](https://github.com/angular/angular.js/commit/f4ff11b01e6a5f9a9eb25a38d327dfaadbd7c80c) by [Tobias Bosch](https://github.com/tbosch) for more details):

  
**$locationChangeStart -> $routeChangeStart -> $locationChangeSuccess -> $routeChangeSuccess**

Both events allow route navigation to be cancelled starting with AngularJS 1.3+ by calling **preventDefault()** on the **event** object.  Let’s take a look at the **$locationChangeStart** event first and see how it can be used to cancel route navigation when needed.

## The $locationChangeStart Event

  
If you dig into the AngularJS core script you’ll find the following code that shows how the **$locationChangeStart** event is raised as the **$browser** object’s **onUrlChange()** function is invoked:

```
$browser.onUrlChange(function(newUrl, newState) {
    $rootScope.$evalAsync(function() {
        var oldUrl = $location.absUrl();
        var oldState = $location.$$state;

        $location.$$parse(newUrl);
        $location.$$state = newState;
        if ($rootScope.$broadcast(&apos;$locationChangeStart&apos;, newUrl, oldUrl,
            newState, oldState).defaultPrevented) {
            $location.$$parse(oldUrl);
            $location.$$state = oldState;
            setBrowserUrlWithFallback(oldUrl, false, oldState);
        } else {
            initializing = false;
            afterLocationChange(oldUrl, oldState);
        }
    });
    if (!$rootScope.$$phase) $rootScope.$digest();
});
```

  
  
The key part of the code is the call to **$broadcast**. This call broadcasts the **$locationChangeStart** event to all child scopes so that they can be notified before a location change is made. To handle the **$locationChangeStart** event you can use the **$scope.$on()** function (in a controller for example) or you can use **$rootScope.$on()** (in a factory or service for example). For this example I’ve added a call to **$on()** into a function that is called immediately after the controller is invoked to watch for location changes:

```
function init() {
    
    //initialize data here..    

    //Make sure they&apos;re warned if they made a change but didn&apos;t save it
    //Call to $on returns a &quot;deregistration&quot; function that can be called to
    //remove the listener (see routeChange() for an example of using it)
    onRouteChangeOff = $scope.$on(&apos;$locationChangeStart&apos;, routeChange);
}

function routeChange(event, newUrl) {
   ...
}
```

  
This code listens for the **$locationChangeStart** event and calls **routeChange()** when it occurs. The value returned from calling **$on** is a “deregistration” function that can be called to detach from the event. In this case the deregistration function is named **onRouteChangeOff** (it’s accessible throughout the controller in this example). You’ll see how the **onRouteChangeOff** function is used in just a moment.
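The deregistration and **preventDefault()** mechanics are easy to see outside of Angular. Here’s a minimal standalone sketch (plain JavaScript, not Angular’s actual implementation; **createEmitter()** is a made-up helper for illustration) of an emitter whose **$on()** returns a deregistration function and whose **$broadcast()** reports whether any listener cancelled the event:

```javascript
// Minimal sketch of the pattern $scope.$on() / $rootScope.$broadcast() follow:
// subscribing returns a function that removes the listener, and broadcasting
// returns an event object whose defaultPrevented flag callers can check.
function createEmitter() {
    var listeners = {};
    return {
        $on: function (name, fn) {
            (listeners[name] = listeners[name] || []).push(fn);
            // Return the "deregistration" function
            return function () {
                var idx = listeners[name].indexOf(fn);
                if (idx !== -1) listeners[name].splice(idx, 1);
            };
        },
        $broadcast: function (name) {
            var event = {
                defaultPrevented: false,
                preventDefault: function () { this.defaultPrevented = true; }
            };
            var args = [event].concat(Array.prototype.slice.call(arguments, 1));
            (listeners[name] || []).forEach(function (fn) {
                fn.apply(null, args);
            });
            return event; // caller inspects event.defaultPrevented
        }
    };
}

var scope = createEmitter();
var off = scope.$on('$locationChangeStart', function (event, newUrl) {
    event.preventDefault(); // cancel the "navigation"
});

console.log(scope.$broadcast('$locationChangeStart', '/new').defaultPrevented); // true
off(); // detach the listener
console.log(scope.$broadcast('$locationChangeStart', '/new').defaultPrevented); // false
```

This mirrors how the core `$location` code shown earlier checks the **defaultPrevented** flag on the object returned from **$broadcast** to decide whether to roll the URL back.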

## Cancelling Route Navigation with $locationChangeStart

The **routeChange()** callback triggered by the **$locationChangeStart** event displays a modal dialog similar to the following to prompt the user:

![Unsaved Changes modal dialog](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/7dac51414bc7_13775/image_4.png)

Here’s the code for **routeChange()**:

```
function routeChange(event, newUrl, oldUrl) {
    //Navigate to newUrl if the form isn&apos;t dirty
    if (!$scope.editForm || !$scope.editForm.$dirty) return;

    var modalOptions = {
        closeButtonText: &apos;Cancel&apos;,
        actionButtonText: &apos;Ignore Changes&apos;,
        headerText: &apos;Unsaved Changes&apos;,
        bodyText: &apos;You have unsaved changes. Leave the page?&apos;
    };

    modalService.showModal({}, modalOptions).then(function (result) {
        if (result === &apos;ok&apos;) {
            onRouteChangeOff(); //Stop listening for location changes
            $location.path($location.url(newUrl).hash()); //Go to page they&apos;re interested in
        }
    });

    //prevent navigation by default since we&apos;ll handle it
    //once the user selects a dialog option
    event.preventDefault();
    return;
}
```

Looking at the parameters of **routeChange()** you can see that it accepts an **event** object and the new URL that the user is trying to navigate to. The **event** object is used to prevent navigation since we need to prompt the user before leaving the current view. In this example we’re checking if the form is dirty (changes have been made) and the user hasn’t saved those changes yet. In cases where the form is dirty the user can be notified and given a chance to stay on the current view.

As the code in **routeChange()** executes a modal dialog is shown by calling **modalService.showModal()** (see my [previous post](http://weblogs.asp.net/dwahlin/archive/2013/09/18/building-an-angularjs-modal-service.aspx) for more information about the custom **modalService** that acts as a wrapper around Angular UI Bootstrap’s $modal service). From there the route navigation is cancelled at the end of the function (event.preventDefault()) since the user needs to choose if they want to stay on the view and finish their edits or leave the view and navigate to a different location.

If the user selects “Ignore Changes” then their changes will be discarded and the application will navigate to the route they intended to go to originally. This is done by first detaching from the **$locationChangeStart** event by calling **onRouteChangeOff()** (recall that this is the function returned from the call to **$on()**) so that we don’t get stuck in a never ending cycle where the dialog continues to display when they click the “Ignore Changes” button. A call is then made to **$location.path($location.url(newUrl).hash())** to handle navigating to the target view. If the user cancels the operation they’ll stay on the current view.

## The $routeChangeStart Event

The **$locationChangeStart** event isn’t the only game in town with AngularJS. Within **angular-route.js** you’ll find the following function that raises a **$routeChangeStart** event as a route is about to be changed:

```
function prepareRoute($locationEvent) {
    var lastRoute = $route.current;

    preparedRoute = parseRoute();
    preparedRouteIsUpdateOnly = preparedRoute &amp;&amp; lastRoute &amp;&amp; preparedRoute.$$route === lastRoute.$$route
        &amp;&amp; angular.equals(preparedRoute.pathParams, lastRoute.pathParams)
        &amp;&amp; !preparedRoute.reloadOnSearch &amp;&amp; !forceReload;

    if (!preparedRouteIsUpdateOnly &amp;&amp; (lastRoute || preparedRoute)) {
        if ($rootScope.$broadcast(&apos;$routeChangeStart&apos;, preparedRoute, lastRoute).defaultPrevented) {
            if ($locationEvent) {
                $locationEvent.preventDefault();
            }
        }
    }
}
```

  
  
Looking through the code you’ll see that a call is made to **$rootScope.$broadcast** to raise the **$routeChangeStart** event.

How does this event fit in with **$locationChangeStart** since they sound quite similar? When **$locationChangeStart** fires you get access to the new URL the user is trying to go to as well as the old URL as strings. When **$routeChangeStart** fires you can get access to the raw route definition defined using **$routeProvider** (more on this in a moment). This can be useful if you want to cancel a route based upon data provided in the route definition.

For example, let’s say that before a user navigates to certain routes they need to be redirected to a login page if one of our AngularJS factories determines that they haven’t logged in yet. Keep in mind that there’s no such thing as client-side security in this scenario and the redirect could always be hacked (quite easily) using browser developer tools. As a result your server-side code should always double-check security for secured pages/views, data, and more. But, a factory can certainly track if a user has logged in or not and then redirect them. If someone hacks the script the server would detect it assuming things are setup correctly. Let’s take a closer look at the **$routeChangeStart** event and how it can be used to cancel route navigation or redirect a user to another route.

## Cancelling Route Navigation with $routeChangeStart

  
How do you determine whether or not a route has to be handled in a special way as a user tries to navigate to it? Although AngularJS doesn’t have anything built-in, you can always add your own properties onto routes defined in an app. Here’s an example of a route that includes a custom property named **secure**:

```
$routeProvider.when(&apos;/customeredit/:customerId&apos;, {
    controller: &apos;CustomerEditController&apos;,
    templateUrl: viewBase + &apos;customers/customerEdit.html&apos;,
    secure: true //This route requires an authenticated user
});
```

This code specifies that the route requires extra handling and requires the user to be logged in before navigating to it. In addition to marking a route as “secure”, you could also define additional information such as allowed roles or any other special requirements (keeping in mind that the server always has to double-check security and that you would never want to push anything super sensitive security-wise down to the client). As a route is about to change you can listen for the **$routeChangeStart** event and use the event parameters to get access to the route definition - including any custom properties. Here’s an example of hooking up to the event using **$rootScope.$on()** in Angular’s **run()** function:  
  

```
app.run([&apos;$rootScope&apos;, &apos;$location&apos;, &apos;authService&apos;,
    function ($rootScope, $location, authService) {
            
        //Client-side security. Server-side framework MUST add its
        //own security as well since client-based “security” is easily hacked
        $rootScope.$on(&apos;$routeChangeStart&apos;, function (event, next, current) {
          //Look at the next parameter value to determine if a redirect is needed        
        });

}]);
```

When **$routeChangeStart** fires you get access to the event object, the next route, and the current route. It’s quite similar to **$locationChangeStart** although the parameter data is very different and provides a lot more information. In the case of **$routeChangeStart** the **next** parameter shown in the previous code will give you access to the following:

![The next route parameter shown in the debugger](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/7dac51414bc7_13775/image_2.png)

Notice that by going through the **$$route** property you can get to the raw route definition data including the custom **secure** property. While going through any property that starts with **$$** is generally frowned upon due to the “private” nature of these properties (they could change in future versions), there isn’t a viable alternative at the present time when you need to get to the route data as the **$routeChangeStart** event is fired. I hope that will change in the future and that route data will be made more publicly accessible.

If you don’t like that option you could always have a factory or service (such as **authService** in the code above) store information about special routes and then evaluate the **newUrl** parameter passed by the **$locationChangeStart** event to determine if a given route needs to be cancelled or redirected. That would involve some string comparisons but could certainly get the job done.
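As a rough sketch of that string-comparison approach (the **securedPaths** list and **requiresAuth()** helper here are hypothetical, not from the Customer Manager app), a factory could keep a list of secured path prefixes and check the **newUrl** value against them:

```javascript
// Hypothetical list of route path prefixes that require an authenticated user
var securedPaths = ['/customeredit', '/admin'];

// Check the newUrl string passed to a $locationChangeStart handler.
// Extracts the route portion after the hash (e.g. "#/customeredit/1")
// and compares it against each secured prefix.
function requiresAuth(newUrl) {
    var hashIndex = newUrl.indexOf('#');
    var path = hashIndex !== -1 ? newUrl.substring(hashIndex + 1) : newUrl;
    return securedPaths.some(function (prefix) {
        return path.indexOf(prefix) === 0;
    });
}

console.log(requiresAuth('http://localhost/#/customeredit/5')); // true
console.log(requiresAuth('http://localhost/#/customers'));      // false
```

A **$locationChangeStart** handler could then call **event.preventDefault()** and redirect to the login route when **requiresAuth(newUrl)** returns true and the user isn’t authenticated.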

As **$routeChangeStart** fires you can use the following code to trigger a redirect to another view if needed. Notice that the **secure** property shown earlier is checked and used to determine if the path needs to change:

```
if (next &amp;&amp; next.$$route &amp;&amp; next.$$route.secure) { 
    if (!authService.user.isAuthenticated) { 
        $rootScope.$evalAsync(function () { 
            $location.path(&apos;/login&apos;); 
        }); 
    } 
} 
```

Notice that the call to **$location.path()** is wrapped in the **$evalAsync()** method so that the location changes properly and everything stays in sync. If you don’t include that little bit of code things won’t work properly.  If you want to completely cancel route navigation you can also call **event.preventDefault()** as shown earlier with the **$locationChangeStart** event code. That’s a new feature that’s now available in AngularJS 1.3+.

## Conclusion

The key to cancelling routes is understanding how to work with the **$locationChangeStart** and **$routeChangeStart** events and cancelling or redirecting as needed. You can see this code in action in the [Customer Manager application](https://github.com/DanWahlin/CustomerManagerStandard) available on GitHub (specifically [customerEditController.js](https://github.com/DanWahlin/CustomerManagerStandard/blob/master/CustomerManager%2Fapp%2FcustomersApp%2Fcontrollers%2Fcustomers%2FcustomerEditController.js) and [app.js](https://github.com/DanWahlin/CustomerManagerStandard/blob/master/CustomerManager%2Fapp%2FcustomersApp%2Fapp.js)). Learn more about the [application here](https://weblogs.asp.net/dwahlin/learning-angularjs-by-example-the-customer-manager-application).</content:encoded></item><item><title>What’s “Right” with AngularJS?</title><link>https://blog.codewithdan.com/whats-right-with-angularjs/</link><guid isPermaLink="true">https://blog.codewithdan.com/whats-right-with-angularjs/</guid><pubDate>Tue, 07 Oct 2014 00:00:00 GMT</pubDate><content:encoded>There’s been a lot of discussion today on some email groups I’m involved with about a post titled [**What’s wrong with Angular.js**](https://medium.com/este-js-framework/whats-wrong-with-angular-js-97b0a787f903). I’d recommend reading through the post first before continuing but in a nutshell it makes it sound as if [AngularJS](https://angularjs.org/) isn’t a viable framework and should never be used. It also critiques two-way data binding which I found interesting (and misguided). It’s definitely a controversial post with some comments that made me laugh and others that I agreed with completely or in part.

I normally ignore these types of posts but in this case I wanted to address the points that were made since I suspect people new to AngularJS will wonder if they’re accurate or not.

## What’s “Right” with AngularJS?

I use AngularJS a lot nowadays and think that it has many superb features out of the box. But, I’ll be the first to admit that it has some flaws as well. The same can be said about any framework out there though. We normally gloss over flaws that we find in frameworks we like and point out flaws in frameworks we don’t like. Most of the flaws that I don’t like are being addressed in [version 2](https://blog.angularjs.org/2014/03/angular-20.html?_escaped_fragment_=) which is great. It’s nice when a framework is open about its future and the AngularJS team has been very open, even going to the effort of publishing weekly meeting notes. 

I wasn’t always a big fan of AngularJS but once I figured it out and learned about how it works I became a _huge_ fan. Having said that, I’m also very open to other options and don’t get all religious about languages, frameworks, etc. While I use AngularJS a lot today that may change down the road if I find something I think is better. Software is constantly evolving and keeping up is part of the fun.

Getting back to the _[What’s wrong with Angular.js](https://medium.com/este-js-framework/whats-wrong-with-angular-js-97b0a787f903)_ post, here are my thoughts on the main points that the author makes. All of the points highlighted below are directly from the post and provide some good talking points to cover what’s “right” with AngularJS.  
  

### 1\. “We are developers, we write and debug code. But Angular is HTML parser.”

AngularJS absolutely does a lot of HTML parsing. But, if you do any maintenance of apps or care much about productivity then the declarative approach AngularJS provides is refreshing. There’s no “one size fits all solution” out there of course but the declarative approach mixed with JavaScript code works out very well in many cases as several other frameworks have shown and proven out (Durandal, Ember, KnockoutJS library, desktop frameworks, etc.). Yes, directives are specific to Angular and make your views Angular-specific, but if you use [KnockoutJS](http://knockoutjs.com/) you use their _data-bind_ syntax (and would have to rip it out to change the data binding), if you use [Ember](http://emberjs.com/) you’re using [Handlebars](http://handlebarsjs.com/) (and would have to rip it out to change frameworks), and I can go on and on. There’s a reason Angular took the declarative approach and I applaud it.

Having said that, if you’re putting a lot of complex expressions into your AngularJS views, adding DOM code to controllers, not leveraging directives or filters where you should in views, etc. then the app can certainly get out of control, become hard to maintain, and feel “messy”. If you understand how AngularJS works though, know what to put into views and what to put into directives or filters, understand the boundaries, the role of controllers and factories/services, etc. then you can develop an extremely clean code base.

While on this topic, I saw a comment elsewhere about getting requests to rip out Angular from projects due to the directives and expressions in HTML and that it meant AngularJS was bad. We’re seeing the exact opposite actually with my company. We’re seeing a huge number of requests for new AngularJS project work and training (yes, that may bias me somewhat but I try to stay open minded to other solutions as well since we work with a variety of companies and not all of them do AngularJS). A company wanting to rip Angular out or wanting to start a new Angular project doesn’t imply Angular is good or bad since there are a variety of reasons people want to rip it out or add it in (developer skills, moving a different direction, just didn’t like it, love it, plus more). Every situation is unique and these “umbrella” statements that imply something is good or bad based on one scenario are just plain ignorant.  
  

### 2\. “Two way databinding is an anti-pattern.”

In some situations that may be true I suppose. But, for apps that collect and display a lot of data I’d have to disagree completely. I guess KnockoutJS, Ember, Durandal and even desktop frameworks like WPF and all the other options out there for two-way data binding have it all wrong then, right (even though they’re working fine and many developers would praise them)? Based on the “Binding from model to view is ok, but from view to model, it’s completely different story!” comment in the post I have to wonder what types of apps the author is building. We build a lot of Line of Business apps and Angular has been a pleasure to work with in that scenario. Is it appropriate for every type of app and does two-way data binding work well in all scenarios? Of course not – that goes without saying. If I was building an app that has to track a lot of graphics on a page I’m going to pick a framework that is designed for that. If I’m doing an app that’s mainly read-only data then I’d have to consider if I need AngularJS or not.

There’s a comment next to that paragraph where the author also states, “While it certainly makes for a nifty demo, and works for the most basic CRUD, it doesn’t tend to be terribly useful in your real-world app.” This comment definitely gave me a good laugh having worked with two-way data binding for many years now (well before AngularJS came on to the scene). I guess those real-world apps many of us have built in AngularJS, KnockoutJS, WPF, and other declarative, two-way binding frameworks/libraries were all bogus and we were inventing reasons to use two-way data binding for nothing. Seriously though, two-way data binding absolutely has its place and I think he’s trying to apply a fairly narrow viewpoint to every situation which doesn’t work.

Bottom line - If Angular went away I’d still be a fan of two-way data binding. If someone wants to call it an “anti-pattern” that won’t change how I think of it because it’s proven itself out to me over the years. I’ve gone the other route where you manually handle getting the data back into the model layer from the UI and it’s not fun, very brittle, and much harder to maintain.

### 3\. “Dirty checking, accessors (Ember and Backbone), Object.observe and all that stuff. Wrong!”

This is related to the previous point actually since it’s tied to data binding in AngularJS. Dirty checking definitely isn’t the strongest feature of AngularJS so I think this is a somewhat valid point in specific scenarios with a ton of data binding going on. But, it gets the job done and does it well if you know about certain rabbit holes and how to avoid them. If you have too many bindings and a large amount of data you’ll know about it as the app becomes sluggish and be forced to optimize things more (which is a good thing). But, he even rags on Object.observe which has very limited support right now and is super early so I’m not sure what he’s basing his mobile battery “hungry dog” comment on. He might be right but there’s no way he can back up that statement at this stage.

How many data bindings/observables are you going to push down to a mobile device in reality anyway if you’re doing it right? Everything has trade-offs but I’m up for the productivity benefits dirty-checking and two-way data binding provides since I’m not seeing any performance problems right now in our apps.

### 4\. “Duplicated app structure with obsolete angular.module”.

Angular provides the ability to create modules that can be combined with other modules to promote re-use and simplified maintenance. That’s one of the big reasons I like it so this is another comment that made me laugh but also wonder exactly what the author meant. There are certainly improvements that can be made here (lazy-loading of modules, dynamic loading of specific scripts, etc.) but it works well and it’s going to get even better in version 2 as the entire framework becomes more modular - including things like dependency injection.  
  
I’m not sure where the author is going with this point given that Angular is very modular (not perfect…but more modular than many of the other options out there). You can definitely break an app up into little pieces for re-use or for team work items. If you design it properly you can completely switch out your client-side data layer (factories/services in Angular-speak) and never touch your views or controllers. You can completely switch out your backend and only have to update your factories/services (potentially – assuming the API changes). It all comes down to knowing how to put the different pieces together though.  
 

### 5\. “Angular is slow”

If I had a dollar for every time I’ve heard someone say that about a framework over the years I’d be retired! People love to make generalized statements like “Framework X is slow” if they don’t like it and want to turn others off to it. If you don’t know what you’re doing then Angular can absolutely get slow with large amounts of data bindings. I can also make KnockoutJS slow, Ember slow, jQuery/DOM slow, and many other libraries as well by doing stupid things. That’s why they introduced directives like _ng-model-options_ so that you can have better control over binding and why they’re working with Object.observe for the future version.

The main point here is that any framework can be slow if you don’t know it well and understand how it works under the covers. Angular in particular isn’t slow but it can be made to be slow by developers. Is there room for improvement? Absolutely, but to call it “slow” is misleading at best and completely dependent on the type of app being built. Use the right tool for the right job!  
  
  

### 6\. “No server side rendering without obscure hacks”

Angular has absolutely ZERO server-side functionality since it’s a client-side framework. If switching your views’ _templateUrl_ to an MVC, Node.js/Express, Rails, etc. route to render a partial HTML fragment on the server-side that the SPA then consumes is a hack then I guess he’s right (if you haven’t done it, it’s very easy in my opinion).  I call them “Hybrid SPAs” and wrote a post about the [role of the server in SPAs](https://weblogs.asp.net/dwahlin/what-s-the-role-of-the-server-in-single-page-applications-spas) if you’re interested. However, I don’t think he’s talking about that here.

More than likely he’s referring to Search Engine Optimization (SEO) issues which is the Achilles’ heel of any client-side framework – which he conveniently fails to mention. If SEO is a key part of your app then SPAs may not be the best way to go in the first place. If you do want SEO then there are some “hacks” (see [this](http://www.yearofmoo.com/2012/11/angularjs-and-seo.html), [this](http://scotch.io/tutorials/javascript/angularjs-seo-with-prerender-io), and [this](http://www.ng-newsletter.com/posts/serious-angular-seo.html)) that can be used to render a static version of your site that can be crawled. These “hacks” are needed for the other client-side frameworks out there as well. [Google announced](https://developers.google.com/webmasters/ajax-crawling/) they’re starting to crawl JavaScript now so I suspect the story will get better with all JavaScript frameworks in the near future as other search engines follow.

### 7\. “Angular is hard to learn”

I’ll give him full points on this one - at least at first glance. I found it hard to learn at first – I admit it. Keep in mind that when I started there were very few docs though and next to no videos out there, which made it much harder. The perceived barrier to entry is probably why my company is seeing a lot of AngularJS training requests nowadays.

But, it’s not nearly as hard as it may appear at first glance and once you know a few basic concepts (directives, scope, and controllers to get started) you can be up and running in no time at all. If you&apos;re brand new to it take 20 minutes and I&apos;ll show you how easy it is to get started [here](https://www.youtube.com/watch?v=tnXO-i7944M). Once you have that “light bulb” moment as I like to call it and understand how everything fits together it becomes much easier to work with.

Once you understand the available Lego blocks and how they fit together it’s not hard at all assuming you know HTML/CSS/JavaScript. You&apos;ll find that it&apos;s very empowering. Writing custom directives could certainly be made much easier since they can be a bit tricky (but not everyone needs to write custom directives), scope can get tricky in some scenarios, and there are a few other tricky things (when to use $apply, factory vs. service vs. provider, etc.). But I think you can find “hard” things in just about any framework out there. I’ve never seen Angular “promoted as easy framework for beginners” though as he states in the post although a beginner could get the feel of it quickly. Watch my [AngularJS in 60ish Minutes](https://www.youtube.com/watch?v=i9MHigUZKEM) video and by the end you’ll see that it’s not hard and actually very powerful once you understand all the pieces.  
  

### **8\. “Google does not use Angular in production for their flag apps like Gmail or Gplus.”**

Google is a big company with many technologies to choose from. Gmail was around WAY before Angular of course and given the extremely high usage of Gmail, AngularJS may not be the best solution there potentially. It’s hard to say. If Gmail or Google+ isn’t using the [Go language](http://golang.org/) does that make Go an irrelevant language?  
  

### 9\. “Vendor Lock”

Does anyone else get tired of hearing this? Get deep into any framework (OSS or from a company) and you’re “locked in.” There are over 800 open source contributors to AngularJS last I heard, so even if Google drops it at this point I’m confident it’ll take on a life of its own. Any framework released by a corporation seems to get the “vendor lock” label nowadays. It’s the OSS purists’ response to anything tied to a corporation, it seems. Now, keep in mind I’m a huge fan of OSS, but I’ve also been bitten by the “dead OSS project” challenge, where I was “locked in” to a project that died on the vine as people got bored with it. It goes both ways (especially if you go with OSS projects that don’t have a lot of contributors or with an unproven company), but you should know that going into a project. Yes, AngularJS is driven by a team at Google right now, but they’re also closely tied to a community of contributors. I personally like the stability that brings – best of both worlds.

What’s really funny about this is that I’ve seen a few of the anti-AngularJS crowd promoting [Facebook’s React](http://facebook.github.io/react/) library lately. Nothing wrong with that at all (I’ve only played with React briefly at this point so I can’t comment on the good or bad) but isn’t that the same type of “vendor lock-in”? There’s a lot of hypocrisy there.

### 10\. “Will be rewritten entirely soon, which is a good thing for framework, but pain for you.”

Yes, version 2.0 will change things, but how do you know it’ll be a pain when it’s not even out yet? Even if it’s a big migration (and I don’t know that – pretending for the sake of argument), it’s not a pain for “you” since “you” would choose whether to move to V2 right away or keep going with the 1.3.x branch. It’s not like all of the other stuff instantly stops working when V2 is released, or that the older version becomes irrelevant. At the pace of technology nowadays it’s not feasible for most enterprises (or even smaller companies) to constantly move to the latest and greatest version.

In AngularJS [version 2](https://blog.angularjs.org/2014/03/angular-20.html?_escaped_fragment_=) they’re taking a modern (ES6, with the ability to backport to ES5) and modular approach. I like what they’re doing and think it makes sense.

## Wrap Up

There’s not one framework, library or component that fits all scenarios perfectly. People always try to find that “one” solution but if you’ve been building software long enough then you know it doesn’t exist and probably never will. Some people prefer the minimal framework with more custom code. I say, “More power to them” if that’s appropriate for their target scenario. I like to build on top of frameworks that give me a lot of power at my fingertips whenever possible. Could the framework change? Could something better come out in the near future? I can reply with a confident “yes” to those types of questions. If you’re in the software business then you just accept that those things will happen and try to make the best decision based on the current app requirements, current environment, team skills, time-frame, and where you think things are heading in the future.

AngularJS is absolutely valuable when used properly in my experience, so don’t read a post like that and take it as the “truth.” Yes, AngularJS has a few flaws that I don’t like (dirty checking could be better – and will be with Object.observe; routing needs a lot of work – and they’re doing that now; debugging certain issues like {{ }} on a blank screen; plus more), but I love the vast majority of what it offers and feel it gives a big boost to my productivity and the productivity of our clients. I like the existing dependency injection (the future DI is going to get even better), the modularity, the core framework functionality, two-way data binding, and the simplified maintenance. Overall – I’m a big fan. Tomorrow that may change, but welcome to the world of software, where planning for change comes as part of the job. Change is what has kept software exciting for me over the years.

At the risk of sounding “preachy” – make your own decisions. Don’t trust me, don’t trust the author of the post, or anyone else for that matter. The Web is crawling with supposed “experts” but every app is unique so build a prototype and prove out your framework/library of choice whether it’s AngularJS or any other framework.

To wrap this up, what do I think is “right” with AngularJS? A lot of things. Support for building SPAs, MV\* approach, two-way data binding, testability, modularity, separation of concerns, routing (native or 3rd party), animations, mobile-capable, good maintenance story, supported by a core team as well as tons of OSS contributors, and I could go on and on. But, if AngularJS goes away at some point and I have to pick up the “next best thing” I’m OK with that.

My friend John Papa also [put together a post](http://www.johnpapa.net/why-does-there-have-to-be-something-wrong-with-angularjs/) about the points here if you&apos;d like another perspective.</content:encoded></item><item><title>Creating Custom AngularJS Directives Part 7 – Creating a Unique Value Directive using $asyncValidators</title><link>https://blog.codewithdan.com/creating-custom-angularjs-directives-part-7-creating-a-unique-value-directive-using-asyncvalidators/</link><guid isPermaLink="true">https://blog.codewithdan.com/creating-custom-angularjs-directives-part-7-creating-a-unique-value-directive-using-asyncvalidators/</guid><pubDate>Mon, 06 Oct 2014 00:00:00 GMT</pubDate><content:encoded>

In a [previous post](https://weblogs.asp.net/dwahlin/building-a-custom-angularjs-unique-value-directive) I demonstrated how to build a unique value directive to ensure that an email address isn’t already in use before allowing a user to save a form. With the changes in AngularJS 1.3+, several new features are available to clean up the previous version of the directive and make it easier to work with. In this post I’ll update the previous post, walk through some of the new features in a directive called **wcUnique**, and show how a few of them can be applied. The code shown is part of the [Customer Manager Standard](https://github.com/DanWahlin/CustomerManagerStandard) sample application that’s available on GitHub.

An example of the directive in action is shown next. In this example the email address shown is already in use by another user which causes the error message to be displayed.

[Screenshot: the unique email validation error](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Using-asyncValidators-to-Build-a-Custom-_B9B3/image_2.png)

  
  
The directive is applied to the **email** input control using the following code:

```
&lt;input type=&quot;email&quot; name=&quot;email&quot; 
        class=&quot;form-control&quot;
        data-ng-model=&quot;customer.email&quot;
        data-ng-model-options=&quot;{ updateOn: &apos;blur&apos; }&quot;
        data-wc-unique
        data-wc-unique-key=&quot;{{customer.id}}&quot;
        data-wc-unique-property=&quot;email&quot;
        data-ng-minlength=&quot;3&quot;
        required /&gt;
```

You can see that a custom directive named **wc-unique** is applied, along with two attributes named **wc-unique-key** and **wc-unique-property** (I prefer to add the **data-** prefix but it’s not required). Before jumping into the directive, let’s take a look at an AngularJS factory that can be used to check whether a value is unique or not.

## Creating the Factory

Unique value checks typically rely on a call to a back-end data store of some type. AngularJS provides services such as **$http** and **$resource** that are perfect for that scenario. For this demonstration I’ll use a custom factory named **dataService** that relies on **$http** to make Ajax calls back to a server. It has a function named **checkUniqueValue()** that handles the unique checks. Note that I don’t typically distinguish between the terms “factory” and “service” as far as the name goes since they ultimately do the same thing and “service” just sounds better to me (personal preference).

```
(function () {

    var injectParams = [&apos;$http&apos;, &apos;$q&apos;];

    var customersFactory = function ($http, $q) {
        var serviceBase = &apos;/api/dataservice/&apos;,
            factory = {};

        factory.checkUniqueValue = function (id, property, value) {
            if (!id) id = 0;
            //encodeURIComponent safely encodes the value for the query string
            return $http.get(serviceBase + &apos;checkUnique/&apos; + id + &apos;?property=&apos; + property +
                &apos;&amp;value=&apos; + encodeURIComponent(value)).then(
                function (results) {
                    return results.data.status;
                });
        };

        //More code follows

        return factory;
    };

    customersFactory.$inject = injectParams;

    angular.module(&apos;customersApp&apos;).factory(&apos;dataService&apos;, customersFactory);

}());
```
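If you want to experiment with this pattern outside of Angular, here’s a framework-free sketch of the same promise-unwrapping idea. The **fakeHttpGet** stub below is hypothetical; it simply mimics how **$http.get** resolves with a response object whose **data** property holds the server payload.

```javascript
// Framework-free sketch of the promise chaining used by checkUniqueValue().
// fakeHttpGet is a hypothetical stub standing in for AngularJS $http.get.
function fakeHttpGet(url) {
    return Promise.resolve({ data: { status: false } }); // value already in use
}

function checkUniqueValue(id, property, value) {
    if (!id) id = 0;
    return fakeHttpGet('/api/dataservice/checkUnique/' + id).then(function (results) {
        return results.data.status; // callers get just the boolean, not the response
    });
}

checkUniqueValue(5, 'email', 'dan@example.com').then(function (isUnique) {
    console.log(isUnique); // false
});
```

Because a value returned from **.then()** becomes the resolution value of the derived promise, callers of the factory never have to know about the shape of the HTTP response.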

## Creating the Unique Value Directive

To handle checking unique values I created a custom directive named **wcUnique** (the wc stands for Wahlin Consulting – my company). It’s a fairly simple directive that is restricted to being used as an attribute. The shell for the directive looks like the following:

```
(function () {

    var injectParams = [&apos;$q&apos;, &apos;dataService&apos;];

    var wcUniqueDirective = function ($q, dataService) {
        return {
            restrict: &apos;A&apos;,
            require: &apos;ngModel&apos;,
            link: function (scope, element, attrs, ngModel) {

                //Validation code goes here

            }
        };
    };

    wcUniqueDirective.$inject = injectParams;

    angular.module(&apos;customersApp&apos;).directive(&apos;wcUnique&apos;, wcUniqueDirective);

}());
```

As the directive is loaded the **link()** function is called, which gives access to the current scope, the element the directive is being applied to, the attributes on the element, and the **ngModelController** object. If you’ve built custom directives before (hopefully you’ve been reading my series on directives!) you’ve probably seen **scope**, **element**, and **attrs** before, but the 4th parameter passed to the **link()** function may be new to you. If you need access to the **ngModel** directive applied to the element where the custom directive is attached, you can “require” **ngModel** (as shown in the code above). ngModel will then be injected into the **link()** function as the 4th parameter, and it can be used in a variety of ways, including validation scenarios. One of the new properties available on **ngModelController** in AngularJS 1.3+ is **$asyncValidators** (read more about it [here](https://docs.angularjs.org/api/ng/type/ngModel.NgModelController)), which we’ll be using in this unique value directive.

[Screenshot: the wcUnique directive code](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Using-asyncValidators-to-Build-a-Custom-_B9B3/image_7.png)

Here’s the complete code for the **wcUnique** directive:

```
(function () {

    var injectParams = [&apos;$q&apos;, &apos;$parse&apos;, &apos;dataService&apos;];

    var wcUniqueDirective = function ($q, $parse, dataService) {
        return {
            restrict: &apos;A&apos;,
            require: &apos;ngModel&apos;,
            link: function (scope, element, attrs, ngModel) {
                ngModel.$asyncValidators.unique = function (modelValue, viewValue) {
                    var deferred = $q.defer(),
                        currentValue = modelValue || viewValue,
                        key = attrs.wcUniqueKey,
                        property = attrs.wcUniqueProperty;

                    //The first time the asyncValidators function runs, the
                    //key won&apos;t be set, so ensure that we have both the
                    //key and the property before checking with the server
                    if (key &amp;&amp; property) {
                        dataService.checkUniqueValue(key, property, currentValue)
                        .then(function (unique) {
                            if (unique) {
                                deferred.resolve(); //It&apos;s unique
                            }
                            else {
                                deferred.reject(); //Add unique to $error
                            }
                        });
                    }
                    else {
                        deferred.resolve(); //Ensure promise is resolved if we hit this
                    }

                    return deferred.promise;
                };
            }
        };
    };

    wcUniqueDirective.$inject = injectParams;

    angular.module(&apos;customersApp&apos;).directive(&apos;wcUnique&apos;, wcUniqueDirective);

}());
```

You’ll notice that the **ngModel** parameter that is injected into the **link()** function is used to access a property named **$asyncValidators**. This property allows async operations to be performed during the data validation process which is perfect when you need to go back to the server to check if a value is unique. In this case I created a new validator property named **unique** that is assigned to a function. The function creates a _deferred_ object and returns the promise. From there the code grabs the current value of the input that we’re trying to ensure is unique and also grabs the **key** and **property** attribute values shown earlier.

The **key** represents the unique key for the object (ultimately the unique identifier for the record). This is used so that we exclude the current object when checking for a unique value across objects on the server. The **property** represents the name of the object property that should be checked for uniqueness by the back-end system.
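To illustrate the role the **key** plays, here’s a minimal, hypothetical in-memory version of the kind of check the back-end might perform. The customers array and **checkUnique** function are illustrative only, not the sample’s actual server code.

```javascript
// Hypothetical in-memory version of the server-side unique check. The key
// (the record id) excludes the record being edited so that a customer who
// keeps their own email address still validates as unique.
var customers = [
    { id: 1, email: 'elaine@example.com' },
    { id: 2, email: 'george@example.com' }
];

function checkUnique(key, property, value) {
    return !customers.some(function (c) {
        if (c.id === key) return false; // skip the record being edited
        return c[property] === value;   // any other match means "taken"
    });
}

console.log(checkUnique(1, 'email', 'elaine@example.com')); // true: own record
console.log(checkUnique(2, 'email', 'elaine@example.com')); // false: taken
```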

Once the variables are filled with data, the **key** and **property** values are passed along with the element’s value (the value of the textbox, for example) to the **checkUniqueValue()** function provided by the **dataService** factory shown earlier. This triggers an Ajax call to the server, which returns a true or false value. If the server reports that the value is unique we resolve the promise that was returned. If the value isn’t unique we reject the promise. A rejection causes the **unique** property to be added to the **$error** property of the **ngModel** so that we can use it in the view to show and hide an error message.
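The resolve/reject contract is easy to see in isolation. Here’s a small framework-free simulation; the **runValidator** helper is hypothetical, and AngularJS performs the equivalent bookkeeping internally on **ngModelController**.

```javascript
// Hypothetical stand-in for what ngModelController does with an async
// validator: a resolved promise clears the error key and a rejected one sets it.
var ngModel = { $error: {} };

function runValidator(name, validatorFn, value) {
    return validatorFn(value).then(
        function () { delete ngModel.$error[name]; },
        function () { ngModel.$error[name] = true; }
    );
}

// A toy "unique" validator that rejects when the address is already in use.
function unique(value) {
    return value === 'taken@example.com'
        ? Promise.reject()
        : Promise.resolve();
}

runValidator('unique', unique, 'taken@example.com').then(function () {
    console.log(ngModel.$error.unique); // true: the value failed validation
});
```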

## Showing Error Messages

The unique property added to the **$error** object can be used to show and hide error messages. In the previous section it was mentioned that the **$error** object is updated but how do you access the **$error** object? When using AngularJS forms, a **name** attribute is first added to the **&lt;form&gt;** element as shown next:

```
&lt;form name=&quot;editForm&quot;&gt;
```

The **editForm** value causes AngularJS to create a child controller named **editForm** that is associated with the current scope. In other words, **editForm** is added as a property of the current scope (the scope originally created by your controller). The textbox shown earlier has a name attribute value of **email** which gets converted to a property that is added to the **editForm** controller. It’s this **email** property that gets the **$error** object.  Let’s look at an example of how we can check the **unique** value to see if the email address is unique or not:

```
&lt;div class=&quot;col-md-2&quot;&gt;
    Email:
&lt;/div&gt;
&lt;div class=&quot;col-md-10&quot;&gt;
    &lt;!-- type=&quot;email&quot; causing a problem with Breeze so using regex --&gt;
    &lt;input type=&quot;email&quot; name=&quot;email&quot; 
            class=&quot;form-control&quot;
            data-ng-model=&quot;customer.email&quot;
            data-ng-model-options=&quot;{ updateOn: &apos;blur&apos; }&quot;
            data-wc-unique
            data-wc-unique-key=&quot;{{customer.id}}&quot;
            data-wc-unique-property=&quot;email&quot;
            data-ng-minlength=&quot;3&quot;
            required /&gt;
    &lt;!-- Show error if touched and unique is in error --&gt;
    &lt;span class=&quot;errorMessage&quot; ng-show=&quot;editForm.email.$touched &amp;&amp; editForm.email.$error.unique&quot;&gt;
        Email already in use
    &lt;/span&gt;
&lt;/div&gt;
```

Notice that the **ngShow** directive (on the span at the bottom of the code) checks the **editForm** property of the current scope and then drills down into the **email** property. It checks whether the value has been touched using the **$touched** property (this property is in AngularJS 1.3+ and reports whether the target control lost focus – it has nothing to do with a touch screen) and whether the **$error.unique** value is present. If **editForm.email.$error.unique** exists then we have a problem: the user entered an email that is already in use. It’s a little bit confusing at first glance since we’re checking if **unique** was added to the **$error** object, which means the email is already in use (the unique validator is in error). If it’s not on the **$error** object then everything is OK and the user entered a unique value.
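To make the expression easier to reason about, here’s a plain JavaScript sketch of the same check. The scope object literal is a hypothetical snapshot of the structure AngularJS builds from the form’s **name** attribute and the input’s **name** attribute.

```javascript
// Hypothetical snapshot of what Angular puts on the scope: the form's name
// attribute adds an editForm property, and each named input becomes a
// property of it.
var scope = {
    editForm: {
        email: {
            $touched: true,            // the input has lost focus
            $error: { unique: true }   // the async validator rejected
        }
    }
};

// Same logic as the ng-show expression: show the message only when the field
// was touched AND the unique validator is in error.
function showUniqueError(form) {
    if (!form.email.$touched) return false;
    return form.email.$error.unique === true;
}

console.log(showUniqueError(scope.editForm)); // true: the message is displayed
```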

The end result is the red error message shown next:

[Screenshot: the “Email already in use” error message](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Using-asyncValidators-to-Build-a-Custom-_B9B3/image_10.png)

## Conclusion

Directives provide a great way to encapsulate functionality that can be used in views. In this post you&apos;ve seen a simple AngularJS unique directive that can be applied to textboxes to check if a specific value is unique or not and display a message to the user. To see the directive in an actual application check out the Customer Manager sample application at [https://github.com/DanWahlin/CustomerManagerStandard](https://github.com/DanWahlin/CustomerManagerStandard).</content:encoded></item><item><title>Learning AngularJS by Example – The Customer Manager Application</title><link>https://blog.codewithdan.com/learning-angularjs-by-example-the-customer-manager-application/</link><guid isPermaLink="true">https://blog.codewithdan.com/learning-angularjs-by-example-the-customer-manager-application/</guid><pubDate>Mon, 01 Sep 2014 00:00:00 GMT</pubDate><content:encoded>

**Updated: 9/23/2014**

I’m always tinkering around with different ideas and toward the beginning of 2013 decided to build a sample application using [AngularJS](http://angularjs.org) that I call **[Customer Manager](https://github.com/DanWahlin/CustomerManagerStandard)**. The goal of the application is to highlight a lot of the different features offered by AngularJS and demonstrate how they can be used together. I also wanted to make sure that the application was approachable by people new to Angular since I’ve never found overly complex applications great for learning new concepts.

The application initially started out small and was used in my [AngularJS in 60-ish Minutes](http://weblogs.asp.net/dwahlin/archive/2013/04/12/video-tutorial-angularjs-fundamentals-in-60-ish-minutes.aspx) video on YouTube but has gradually had more and more features added to it and will continue to be enhanced over time. It’s used in a new “end-to-end” instructor-led training course my company wrote for AngularJS as well as in some video courses that will be coming out. Here’s a quick look at what the application home page looks like:  
  
  

[Screenshot: the Customer Manager home page](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_2.png)
  

In this post I’m going to provide an overview about how the application is organized, back-end options that are available, and some of the features it demonstrates. I’ve already written about some of the features so if you’re interested check out the following posts:

- [Building an AngularJS Modal Service](http://weblogs.asp.net/dwahlin/archive/2013/09/18/building-an-angularjs-modal-service.aspx)
- [Building a Custom AngularJS Unique Value Directive](http://weblogs.asp.net/dwahlin/archive/2013/09/24/building-a-custom-angularjs-unique-value-directive.aspx)
- [Using an AngularJS Factory to Interact with a RESTful Service](http://weblogs.asp.net/dwahlin/archive/2013/08/16/using-an-angularjs-factory-to-interact-with-a-restful-service.aspx)

Two versions of the application are available on GitHub, including a “standard” version that uses out-of-the-box AngularJS features and a custom version that provides custom routing and dynamic loading of controller scripts:
  
CustomerManagerStandard:  [https://github.com/DanWahlin/CustomerManagerStandard](https://github.com/DanWahlin/CustomerManagerStandard &quot;https://github.com/DanWahlin/CustomerManagerStandard&quot;)

CustomerManager with Custom Routing: [https://github.com/DanWahlin/CustomerManager](https://github.com/DanWahlin/CustomerManager &quot;https://github.com/DanWahlin/CustomerManager&quot;)

# Key Features

  
The Customer Manager application certainly doesn’t cover every feature provided by AngularJS but does provide insight into several key areas. Here are a few of the features it demonstrates with information about the files to look at if you want more details:

1. Using factories and services as reusable data services (see the **app/customersApp/services** folder)
2. Creating custom directives (see the **app/customersApp/directives** and **app/wc.directives/directives** folder)
3. Custom paging (see **app/views/customers/customers.html** and **app/customersApp/controllers/customers/customersController.js**)
4. Custom filters (see **app/customersApp/filters**)
5. Showing custom modal dialogs with a reusable service (see **app/customersApp/services/modalService.js**)
6. Making Ajax calls using a factory (see **app/customersApp/services/customersService.js**)
7. Using Breeze to retrieve and work with data (see **app/customersApp/services/customersBreezeService.js**). Switch the application to use the Breeze factory by opening **app/customersApp/config.js** and changing the **useBreeze** property to **true**.
8. Intercepting HTTP requests to display a custom overlay during Ajax calls (see **app/wc.directives/directives/wcOverlay.js**)
9. Custom animations using the GreenSock library (see **app/customersApp/animations/listAnimations.js**)
10. Creating custom AngularJS animations using CSS (see **Content/customersApp/animations.css**)
11. JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder)
12. Card View and List View display of data (see **app/customersApp/views/customers/customers.html** and **app/customersApp/controllers/customers/customersController.js**)
13. Using AngularJS validation functionality (see **app/customersApp/views/customerEdit.html**, **app/customersApp/controllers/customerEditController.js**, and **app/customersApp/directives/wcUnique.js**)
14. Nesting controllers
15. More…

# Application Structure

The structure of the application is shown below. The homepage is **index.html** and is located at the root of the application folder. It defines where application views will be loaded using the **ng-view** directive and includes script references to AngularJS, the AngularJS routing and animation scripts, and a few others located in the **Scripts** folder, as well as to custom application scripts located in the **app** folder.

[Screenshot: the application folder structure](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_10.png)

The **app** folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files, but after experimenting with several of them I decided that I prefer content organized by **module name** (**customersApp** and **wc.directives** are examples of module folder names). Within each module folder I follow a convention such as **controllers**, **views**, **services**, etc. Individual features are identified within a convention folder by using additional subfolders such as **customers** and **orders**. Mixing the convention and feature approaches this way helps me find things a lot faster.
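To make that concrete, here’s roughly what the layout looks like for this application (abbreviated; the folder and file names are the ones referenced throughout this post):

```
app/
├── customersApp/
│   ├── animations/
│   │   └── listAnimations.js
│   ├── controllers/
│   │   └── customers/
│   │       └── customersController.js
│   ├── directives/
│   │   └── wcUnique.js
│   ├── filters/
│   ├── partials/
│   ├── services/
│   │   ├── customersService.js
│   │   └── modalService.js
│   └── views/
│       └── customers/
│           └── customers.html
└── wc.directives/
    └── directives/
        └── wcOverlay.js
```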

I’m a huge believer in having some conventions in place especially when it comes to team development. Having managed several development teams over the years I learned that consistency across projects is good since people come and go on teams and taking that approach allows files to be categorized and located easily (such as controllers and services). If you’re new to an app (a new hire, production support, a contractor, etc.) and are given a pure feature-based folder structure to work with it can be challenging to find things if you don’t know the app features well since whoever created the folder structure did it based on their way of thinking about the app. If some convention is mixed in with the features it becomes much easier to find things in my opinion and it makes it easier to understand multiple projects – not just one. On the other hand, going with a pure convention-based approach causes challenges with large applications since a **controllers** folder could have a ton of files in it which is why I like to segregate things by module/convention/feature.

There are **MANY** different opinions on this so my recommendation is to go with whatever works best for you. I’m definitely not saying this is “the way”…this is my way. Anyone who says, “You’re doing it wrong!” should be ignored because in my experience these are generally the type of close-minded people you run into who aren’t willing to take time to consider alternatives to their approach. Contrary to what some people think, there is no “one right way” to organize scripts and files. As long as the scripts make it down to the client properly (you’ll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls so the structure is irrelevant to the browser), the way you organize them is completely up to you. Here’s what I ended up doing for this application:

1. Animation code for some custom animations is located in the **app/customersApp/animations** folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called [GreenSock](http://www.greensock.com/).
2. Controllers are located in the **app/customersApp/controllers** folder. Some of the controllers are placed in subfolders based upon their feature/functionality while others are placed at the root of the **controllers** folder since they’re more generic:
    
      
    [Screenshot: the controllers folder](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_6.png)
    
3. The **directives** folder contains the custom directives created for the application. Directives that can be used across projects are placed in the **wc.directives/directives** folder which represents the module/convention approach. 
4. The **filters** folder (**app/customersApp/filters**) contains the custom filters created for the application that filter city/state and product information.
5. The **partials** folder contains partial views. This includes things like modal dialogs used in the application.
6. The **services** folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data/Ajax functionality. Two types of services exist to send and retrieve data to/from a RESTful service. The application uses $http by default but can be switched to use [BreezeJS](http://www.breezejs.com/) (an alternative way to work with data) by updating the config.js file.
7. The **views** folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality:

&gt; [](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_12.png)

# Back-End Technologies

  
The Customer Manager application ([grab it from GitHub](https://github.com/DanWahlin/CustomerManagerStandard)) provides two different back-end options, **ASP.NET Web API** and **Node.js**, so you&apos;ll want to select one of them in order to run the application. The ASP.NET Web API back-end uses C# and Entity Framework for data access and stores data in SQL Server (LocalDb). The other option is Node.js, Express, and MongoDB.

## Using the ASP.NET Web API Back-End Option

To run the application using the ASP.NET Web API/SQL Server back-end, open the .sln file at the root of the project in Visual Studio 2012 or higher (the free [Visual Studio 2013 Community Edition](http://www.visualstudio.com/en-us/products/visual-studio-community-vs) is fine). Press **F5** and a browser will automatically launch and display the application. Under the covers, Entity Framework code first is used to create the database dynamically.

## Using the Node.js Back-End Option

To run the application using the Node.js/MongoDB back-end, follow the steps listed in the [readme](https://github.com/DanWahlin/CustomerManagerStandard) on the GitHub site.

# Front-End Technologies

  
The Customer Manager application uses the following frameworks and libraries on the front-end:

1. [AngularJS](http://angularjs.org) (with the ngRoute and ngAnimate modules)
2. [Bootstrap](http://getbootstrap.com/)
3. [Angular UI BootStrap](http://angular-ui.github.io/bootstrap/)
4. [GreenSock Animations](http://www.greensock.com/)
5. [BreezeJS](http://breezejs.com) (optional)

# Optional

The application uses the native AngularJS $http service by default to make calls back to the server. However, by going to **app/customersApp/services/config.js** you can switch from $http to [BreezeJS](http://breezejs.com) (a very cool client-side data management library). When using BreezeJS you’ll also want to include the [Breeze Angular Service](http://www.breezejs.com/documentation/breeze-angular-service) (the script is already loaded in **index.html** to keep things simple). For more details on what BreezeJS is all about check out [my previous post](http://weblogs.asp.net/dwahlin/archive/2013/03/27/getting-started-managing-client-side-data-with-the-breeze-javascript-library.aspx).

1. [BreezeJS](http://www.breezejs.com/) 
2. [Breeze Angular Service](http://www.breezejs.com/documentation/breeze-angular-service)
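The $http/Breeze switch boils down to exposing one of two services that share the same interface. Here’s a hypothetical, framework-free sketch of the pattern; the service and function names below are illustrative, not the sample’s actual code.

```javascript
// Hypothetical sketch of config-driven service selection: both implementations
// expose the same interface, and a single flag decides which one the rest of
// the app sees (the real sample toggles a useBreeze property in config.js).
var config = { useBreeze: false };

var httpDataService = {
    getCustomers: function () { return Promise.resolve(['via $http']); }
};

var breezeDataService = {
    getCustomers: function () { return Promise.resolve(['via Breeze']); }
};

// Callers depend on "dataService" and never know which back-end is active.
var dataService = config.useBreeze ? breezeDataService : httpDataService;

dataService.getCustomers().then(function (customers) {
    console.log(customers[0]); // 'via $http'
});
```

Because both objects satisfy the same contract, swapping the data layer requires no changes to controllers or views.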

# Application Views

  
The application has several views designed to cover different aspects of AngularJS from editing data to displaying, paging, and filtering data. Here are the main views:

## Customers View

This view provides multiple views of customer data (Card View and List View), supports paging, allows customers to be added or removed, and provides filtering functionality.

### **Card View** (app/customersApp/views/customers/customers.html)

This view displays customer information and allows customers to be edited (by clicking their name), deleted (by clicking the X), or their orders to be viewed.

  
[Screenshot: the Card View](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_14.png)

### **List View** (app/customersApp/views/customers/customers.html)

This view displays customer information in a standard list type of view.

![Customers List View](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_16.png)

## **Login View** (app/customersApp/views/login.html)

This view allows the user to log in. Security isn’t actually enforced on the server in this demo (it simply returns a boolean **true** value), but the client side does have security functionality built in to show how it could be integrated with AngularJS, how events can be broadcast and handled, and more. Keep in mind that in a “real” application every secured resource on the server would have to be checked for the proper security credentials regardless of what data or information the client has.
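The event broadcasting pattern mentioned above can be sketched like this. The event name and payload shape are illustrative assumptions, not the app’s actual contract:

```javascript
// Hypothetical sketch: an auth service broadcasts a login status change on
// $rootScope so any listening controller can react. Names are assumptions.
function announceLoginStatus($rootScope, loggedIn, userName) {
    $rootScope.$broadcast('loginStatusChanged', {
        loggedIn: loggedIn,
        userName: userName
    });
}

// A controller would then handle it along the lines of:
// $scope.$on('loginStatusChanged', function (event, data) { /* update UI */ });
```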

  
![Login View](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_18.png)

## **Customer Edit View** (app/customersApp/views/customers/customerEdit.html)

This view adds some custom AngularJS validation including a custom directive (wcUnique.js) that ensures that the email address being added is unique.

![Customer Edit View](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_20.png)

## **Customer Orders View** (app/customersApp/views/customers/customerOrders.html)

This view shows the orders for a specific customer. Orders can be sorted by clicking on the column headings.

![Customer Orders View](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_22.png)

## **Orders View** (app/customersApp/views/orders/orders.html)

This view shows orders for all customers and supports paging, filtering, and sorting of the orders.

![Orders View](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_24.png)

## **About View** (app/customersApp/views/about.html)

There isn’t much to this view but I listed it for the sake of completeness.

![About View](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_26.png)

# Custom Directives

Three custom directives are currently included in the application:

1. Unique value directive (**app/customersApp/directives/wcUnique.js**) – This directive ensures that email addresses entered on the customer edit view are unique. It makes a call back to a service and then calls ngModel.$setValidity() to handle showing or hiding an error message. A post on this directive [can be read here](http://weblogs.asp.net/dwahlin/archive/2013/09/24/building-a-custom-angularjs-unique-value-directive.aspx).
    
    &gt; ![wcUnique Directive Example](https://mscblogs.blob.core.windows.net/media/dwahlin/Media/image_612DB8F0.png)
    
2. Angular Overlay directive (**app/wc.directives/directives/wcOverlay.js**) – This directive intercepts XMLHttpRequest calls and displays a custom overlay (it tracks AngularJS calls as well as jQuery calls). The directive is available in the application or as a [stand-alone directive on GitHub](https://github.com/DanWahlin/AngularOverlay).
    
    &gt; ![AngularOverlay Directive Example](/images/blog/learning-angularjs-by-example-the-customer-manager-application/appExample.webp)
    
3. Menu highlighter directive (**app/wc.directives/menuHighlighter.js**) – This directive is responsible for highlighting menu items as a user clicks on them.
    
    &gt; ![Menu Highlighter Directive Example](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_28.png)
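The core pattern behind the unique value directive (requiring ngModel and calling $setValidity based on a service response) can be sketched like this. This is a simplified, hypothetical version; the real wcUnique.js wires in the app’s data service, and the service method name below is an assumption:

```javascript
// Simplified sketch of a unique-value validator directive. The service
// method name (checkUniqueValue) and attribute usage are assumptions.
function wcUniqueDirective(dataService) {
    return {
        restrict: 'A',
        require: 'ngModel',
        link: function (scope, element, attrs, ngModel) {
            element.on('blur', function () {
                dataService.checkUniqueValue(attrs.wcUnique, ngModel.$viewValue)
                    .then(function (unique) {
                        // Toggles the "unique" validation error on the model,
                        // which shows or hides the error message in the view
                        ngModel.$setValidity('unique', unique);
                    });
            });
        }
    };
}
// Registered in the real app along the lines of:
// angular.module('customersApp').directive('wcUnique', ['dataService', wcUniqueDirective]);
```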

# Custom Filters

The application includes two custom filters used to filter data in the customers and orders views:

1. Name/City/State Filter (**app/customersApp/filters/nameCityStateFilter.js**)
    
    &gt; ![Name/City/State Filter Example](https://mscblogs.blob.core.windows.net/media/dwahlin/Windows-Live-Writer/Learning-AngularJS-by-Example--The-Custo_9F6C/image_30.png)
    
2. Name/Product Filter (**app/customersApp/filters/nameProductFilter.js**)

# Conclusion

I’ll be enhancing the application even more over time and welcome contributions as well. Tony Quinn contributed the initial Node.js/MongoDB code (thanks Tony!), which is very cool to have as a back-end option, and several other contributions have been made for testing and the initial version of the menuHighlighter directive. Access the [standard application here](https://github.com/DanWahlin/CustomerManagerStandard) and a version that has [custom routing in it here](https://github.com/DanWahlin/CustomerManager). Additional information about the custom routing can be found [in this post](http://weblogs.asp.net/dwahlin/archive/2013/05/22/dynamically-loading-controllers-and-views-with-angularjs-and-requirejs.aspx).

**Onsite Developer Training:** If your company is interested in onsite training on JavaScript, ES6, AngularJS, Node.js, C# or other technologies please email [training@thewahlingroup.com](mailto:training@thewahlingroup.com) for details about the classes that we offer.</content:encoded></item></channel></rss>