As I work towards being a data scientist, I am learning and improving my Python skill set.
As I go through my journey to write programs and gain a better understanding of algorithms, I will be learning some cool new skills. I'll start by setting up my development environment and gathering resources.
I am excited to continue working on a Python project I started and to learn additional skills to build the modules that make a program successful.
One of the main goals is to write a program that is built like a story: it has a main idea and it solves a problem.
I will be documenting my progress and writing interesting code, so you will find useful code and associated programs here.
We will review problem resolution techniques and help desk management. The goal of problem resolution management is to reduce the impact and frequency of incidents that cause significant problems.
By reducing problems, we will improve the customer experience. To improve problem resolution, we will identify root causes, implement workarounds, and develop best practices.
When dealing with incidents and problems, it is important to know different tactics. With incidents, you want to resolve and mitigate quickly. Incidents are typically high impact, such as security breaches, and may disrupt critical services until resolved.
A range of problems may contribute to incidents. Problems should be reviewed for root cause and solution, and a good solution should be implemented to prevent future occurrences. Case details should include the problem, the steps taken, and the quickest resolution.
The following are the problem resolution phases. Phase 1 is problem determination, or root cause. In this phase we look at problem trends and analysis. We talk to teams and determine if there are any hot-button problems or issues. We look for areas where we can improve the overall customer experience, and we create a road map to chart improvements. This road map covers the key services and products we deliver and their associated improvements.
The second phase is problem control. We look at the areas where we want to concentrate energy and resources. We determine which key products and services have problems or issues to address, decide which potential issues may have the highest customer impact, and address them accordingly. We will put together a plan to address known issues, apply fixes, and improve the most plaguing problems.
The third phase is error control and problem resolution. We document fixes so we can quickly resolve problems and incidents; this documentation should improve problem resolution time. We also want to determine the best workaround for common problems and incidents. Implementing permanent fixes will reduce the number of incidents and the risk to the organization.
In problem resolution management, we will define the following roles. Problem director: responsible for improving problem resolution; develops a matrix for continuous improvement for budgeting and management approval; approves and manages projects to improve the environment and lower overall IT costs; utilizes project management skills to implement solutions that minimize problems and incidents.
The next role in problem management is problem manager. This role leads the problem resolution team, prioritizes investigations, proposes changes to the director, and tracks trends in the environment. The problem manager uses performance metrics tied to remediation of pressing issues, provides coaching to the team, and helps manage and document risk. When problems need to be escalated, the problem manager handles the escalation.
The next role is problem coordinator. This role helps gather and organize documentation and can lead low-impact investigations. The coordinator assists in reporting and follows up on tasks assigned to teams and experts. This role also helps manage major incidents, handover meetings, the knowledge base, known errors, and workarounds.
How do we set up and manage key metrics? The key metric in problem management is improving the customer experience. To improve the customer experience, we will work to lower problem incidents, reduce recurring incidents, reduce cost, and improve efficiency.
Another key metric is efficacy: what has problem management done for us lately? Host a resolution meeting after major incidents. For success, manage key incident events and emphasize changes that improve the customer experience.
A good questioning method is 5W2H: what are the symptoms, where is the location, when was it reported, why did it happen, who is affected, how often does it occur, and how much is the impact?
To evaluate success, we look at return on investment. We will weigh the benefits delivered to the organization and customer against the overall cost. Some of the costs we will review are time, energy, parts, and labor. This comparison provides the ROI.
To fully appreciate the technical environment, we must know all of the key management and executive players. To effectively manage the group, it is important to know what each team brings to the table. We will set goals to ensure a great customer experience.
To ensure a great customer experience, we will focus on improving event management. Event management ensures systems, tools, and functionality are tested and monitored. Any time an event disrupts systems or customer service, the incident team springs into action. The team uses logs, device monitoring, and statistical analysis to pinpoint potential system failures. The team should work proactively, correcting problems before they become bigger issues.
Once the incident is resolved, the problem team will do some investigative work to prevent future occurrences.
When working to improve services, we include subject matter experts (SMEs). Knowing which group or team an SME belongs to can be helpful. SMEs can help with fact gathering and determining root causes, and they help implement fixes and workarounds.
To improve customer service and problem resolution, we will incorporate data analysis. We will perform regular tracking and analysis of problems and use the data to get in front of issues before they become bigger problems. Some of the key metrics we will look at are customer feedback, system uptime, and quality of system performance.
A good question to ask is: what can we do to improve the user experience? The Pareto principle is a good guide for customer service interactions and problem resolution measures: 80% of issues come from 20% of the operational environment.
To break down that 20% of problems into manageable data, we can ask the following questions: which products and services are generating the most problematic noise? Which hardware platform is causing the most issues? Which cases or problems represent the most trouble tickets? Which present the highest cost for the organization?
We investigate problems by determining the biggest contributors to user pain, headaches, cost, and complaints. Some of the tools we will utilize are Excel, Power BI, Tableau, and MATLAB.
One of the major initiatives of problem resolution is to reduce rework. Search for duplicate issues and work toward permanent solutions. Ask team members and engineers what problems they have to solve over and over! Preventing and minimizing repeat issues will save time and money.
When data analysis is limited, we can audit bridge or resolution/incident wrap-up calls. Take note of good and bad behavior. Observe the flow, cadence, speakers, dead air, and progress over time. We can try graphing the process in Excel (a time vs. process step chart) to visualize the information. After the audit is completed, we will review trends and take the appropriate actions.
Improve the problem resolution skill set: learn a data analysis language like R or Python, and improve SQL skills to retrieve, organize, and group data. One of the end goals is to improve the ability to quickly and efficiently solve and close issues.
To improve efficiency, we will select high-value targets to concentrate on. We want to prove the value of the problem-solving method and make team members feel valuable. We will prioritize using a data-driven model.
To improve the problem-solving process, we will review metric requirements. Typically, major incidents should be investigated within 48 hours. This requirement can produce bad outcomes: a high number of cases, stress, fake root causes, and cases closed but not solved within the 48-hour window.
Problem prioritization varies across business units, and there are several variables we should consider. An often-overlooked metric is the probability of determining root cause based on the team's skill set and the data available. Another is the probability of funding, scheduling, and completing the project to correct the root cause. We determine whether a project is worth undertaking based on cost-benefit analysis. Finally, we calculate ROI % = probability of root cause x probability of completion x ((benefit - cost) / cost) x 100.
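The ROI calculation above can be sketched in Python; the probabilities and dollar figures below are hypothetical examples, not real project data:

```python
def roi_percent(p_root_cause, p_completion, benefit, cost):
    """ROI % = P(root cause) x P(completion) x ((benefit - cost) / cost) x 100."""
    return p_root_cause * p_completion * ((benefit - cost) / cost) * 100

# Hypothetical project: 80% chance of finding root cause,
# 60% chance of funding and completing the fix.
print(roi_percent(0.8, 0.6, benefit=50_000, cost=20_000))  # roughly 72
```

A project scoring well here is worth proposing; a negative or near-zero result suggests the fix is not worth undertaking.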
To successfully utilize data, we need to accurately perform cost analysis. Become a subject matter expert, or ask someone to break down the steps of the project. Calculate work packages and determine how many work hours each will take. We can work with the controller to determine average overhead cost and/or all-in labor cost. If we have to estimate labor cost, we will use $75/hour. We then multiply the labor rate by estimated hours to get a job cost.
We will use the value formula and enter all the details for the projects being considered. We will use Excel to organize and manage the data. Once this is completed, we will sort the results into high-value targets and use that as a to-do list. Additionally, we will present the top three valued targets and let management prioritize.
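The Excel sort described above can also be sketched in Python; the project names, probabilities, and figures are made-up examples:

```python
# Hypothetical candidate projects: (name, p_root_cause, p_completion, benefit, cost)
projects = [
    ("Replace failing SAN",   0.9, 0.5, 80_000, 40_000),
    ("Patch email gateway",   0.7, 0.9, 30_000,  5_000),
    ("Rewrite login service", 0.6, 0.4, 60_000, 50_000),
]

def roi_percent(p_cause, p_done, benefit, cost):
    return p_cause * p_done * ((benefit - cost) / cost) * 100

# Sort descending by ROI to build the high-value-target to-do list.
ranked = sorted(projects, key=lambda p: roi_percent(*p[1:]), reverse=True)
for name, *rest in ranked:
    print(f"{name}: {roi_percent(*rest):.1f}%")
```

The top three rows of `ranked` are what we would present to management for prioritization.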
The end result will provide high-value results to the company that significantly improve the customer experience. To document the team's efforts, we will write a case study to show the results.
The next step in problem resolution is cause analysis. We can tell a story based on the events that occurred: we will document the facts, determine the chain of events, record variables and circumstances, and make recommendations for improvements. The end result will be to recommend and implement actions that reduce or eliminate the recurrence of issues and problems.
To document and resolve issues, we will create accurate problem statements. We will provide a detailed description of what caused the problem. The description should contain a single main object and the deviation that caused the problem. The details should be focused, factual, and evidence based.
To investigate the problem, we will use the five whys, asking why repeatedly until the underlying cause emerges. For example: Why was the system down? A hard drive failed and data was corrupted. Why did the drive fail? There was a power outage. Why did the outage reach the server? A power line was damaged and the UPS failed. The five whys help determine why the problem occurred.
We can stop problem resolution when the information source can no longer provide reasons for the problem or point of failure. Additionally, we can stop gathering information when we can recommend or execute a solution that prevents recurrence of the problem or point of failure. We don't always have to uncover every detail that caused the problem in order to recommend a solution.
To get a complete picture of the problem, we will review contributing factors. These factors provide context for our decision making. They will help reduce the impact of the problem.
When reviewing cause analysis and end results, we will present findings utilizing visual aids. Visual aids such as graphs can assist in comparing post-incident results to standards. One of the goals is to keep stakeholders on track and focused on the next steps.
A key aspect of problem resolution is to document the incident, recommend appropriate action, and sell the best solutions to the executive team. We can try utilizing a flowchart-based method such as KT incident mapping.
To minimize the impact of an incident, we'll implement workarounds. The focus of a workaround is to get some functionality back for the affected users. At times we will encounter known errors. These issues will be put on the back burner to address later.
When addressing temporary items, it should be noted that these items are often forgotten. The trouble ticketing system should have a record of the problem, reason, type, follow-up, and details of the workaround. Maintenance and review of tickets should ensure temporary workarounds are addressed and solutions found.
The benefits of maintaining a good knowledge base include: known errors and workarounds are found easily, engineers do not waste time working on known issues, and problems can be solved quickly.
Customers can be given good updates on time frame and solution, and automating potential solutions can improve the customer experience.
To improve the level of service, we can look at the following tiers. Lowest level: reactive, customer initiated, engineers do not have a solution. Good: reactive, customer initiated, engineers have a solution. Great: proactive, automated or manually initiated, customer forewarned about the problem.
To improve customer service, we will prioritize known issue and workaround management. We will address customer concerns and communicate key information to defuse concern and anger.
By improving known issue and workaround management, we can improve customer satisfaction scores and earn recommendations based on improved services.
Best practice for knowledge management: create and enforce policies to ensure facts are entered in the knowledge management system.
Customer support agents should be empathetic, supportive, proactive, and professional.
To improve services, we document problem systems, problem triggers, impact, and next steps to prevent recurrence. To improve efficiency, we will link known issues to problem investigations. We will develop a process to implement two-way accounting for issues.
To effectively manage the problem database, we will update case status to closed when the issue has been resolved. Additionally, we will delete known issues when remediation and resolution have been completed. After closing and deleting known issues, we will produce a report that shows the current number, category, and state of each error.
To review the status and impact of known errors, we will create an impact statement. We will perform an analysis of the cost of each known error and the benefit derived from solving it.
One of the keys to good problem management is getting the most value out of our remediation efforts. Be aware that some known issues are not worth solving, and realize that not everything in the environment is going to be perfect. If the impact and occurrence rate are low, the business can live with the risk.
To improve service, we will review tools in the trouble ticket knowledge management database to link known errors with resolution documentation and improve performance.
To continue to move forward, we need to develop permanent solutions to problems. We should not allow temporary solutions to become long term, so we can develop a process to track how long temporary fixes have been in place. Some of the key statistics to review are how often the error occurs, causes issues, or disrupts service; when these are high, consider a permanent solution.
To improve services, we will report on status, technical cost, and conversion to permanent solutions. These results will be reported to the management team. As a team, we need to make time for the remediation process, work on minimizing major incidents, and make the environment stable. To help manage this process, we will come up with a prioritized list of issues to work on.
When we prioritize resolution work for workarounds, we will use the following variables: probability of funding and success; cost of the solution, including labor, parts, and downtime; and benefits of the project and solution, such as saving time, saving money, and gaining new customers. Additionally, we should quantify the value of the solution, including cost savings in labor and time, improved customer service, and process improvement. After finding a solution to a workaround, we will perform an ROI analysis. The ROI calculation is: probability of funding x probability of completion x ((benefits - costs) / costs) x 100.
We will strive to improve the customer experience by prioritizing known error and work arounds by ROI. We work to reduce incidents. The end result of our efforts will be improved customer satisfaction. Additionally, this should lead to increased sales.
When working on problem resolution, we should always clarify the issue to ensure accuracy. We should also verify that the problem actually exists by checking the situation with tools and facts. Additionally, we should ask detailed questions to get additional facts about the problem.
After verifying the facts and the situation, we can send out notification about the incident and get the other resources needed involved.
When we are investigating issues, we must write a problem statement. We will develop a requirement that problem statements are accurate and provide a good description of the problem. A good problem statement can improve average resolution time by 18% to 24%.
A problem statement should consist of two elements: a specific object and a specific deviation. A good approach is to focus on one team and get 100% compliance. To verify compliance, we will audit the problem knowledge database: we will review 3 months of closed tickets to determine if the key elements exist, reviewing case titles to determine whether each case passes or fails on specific object and deviation.
A pass requires a single specific object and a single deviation; a fail is anything more than that, or any other element. Additionally, we will calculate the average time to solve for pass versus fail cases. We will update the management team on the results, push for a mandatory problem statement requirement, and provide training. The end result will be improved problem resolution times and customer service.
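The pass/fail comparison above can be sketched in Python; the case titles, audit labels, and resolution hours below are invented examples:

```python
# Hypothetical audit records: (case_title, passes_audit, hours_to_resolve)
cases = [
    ("Mail server not sending external mail", True,  4.0),
    ("Stuff broken again",                    False, 19.5),
    ("VPN gateway dropping connections",      True,  6.5),
    ("Network slow and printer jammed",       False, 12.0),
]

def average(hours):
    return sum(hours) / len(hours)

# Compare average resolution time for cases that pass vs. fail the audit.
passed = [h for _, ok, h in cases if ok]
failed = [h for _, ok, h in cases if not ok]
print(f"pass avg: {average(passed):.1f}h, fail avg: {average(failed):.1f}h")
```

A gap between the two averages is the evidence to bring to management when pushing for the mandatory problem statement requirement.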
One big decision we are going to make is what data to collect. We will work to get all the pertinent facts. A good tip is to separate fact from opinion.
As we document the process and problems, we will make information available to team members. We will strive to keep all information updated with current status. Our goal is to make all information easy to understand and follow.
The next step in the process is to identify and test possible causes. We will use 5W2H to gather and organize facts. Based on the information, we will assemble a team with the appropriate skills. Once the team is assembled, we will start generating ideas to solve the problem. The first step is to review the case information. During the information review, we will present slides and graphs to highlight key information. We will encourage team members to suggest ideas for solutions; the suggestions should be fully developed with specific details.
The next step is to evaluate the ideas. We will focus on evaluating one idea at a time, asking questions and reviewing scenarios and assumptions. We eliminate ideas that do not explain the facts. To get a good correlation between facts and assumed solutions, we can use 8D, A3, or Kepner-Tregoe. We may want to eliminate ideas that lack supporting facts and assumptions. Document facts that could not be explained, and then move on.
- Review risks and benefits: what needs to be improved, and what issues need to be addressed and minimized.
To select the best idea and solution, we will utilize Occam's razor: the best solution is the one that requires the fewest assumptions. We will create a list of actions required to verify the cause of the issue.
The next step in the process is decision making and determining what actions will be taken. Some of the criteria used will be requirements and options. We will develop a framework to work through the decision-making process.
First, identify the issue in context. Second, perform a risk-benefit analysis. Third, identify and analyze options. Then select a strategy and make a decision. Next, implement the strategy. Finally, monitor and evaluate the results.
- Identify the issues and goals of the solution. State and specify the solution.
- Analyze available options: perform research and identify a top-ten list. Write down all of the options and document whether each one meets the criteria.
- Select a strategy: which option gives us the most benefit?
- Implement the strategy. Propose the strategy to stakeholders, make a recommendation, and share documentation and performance. Point out the benefits, risks, and value.
- Monitor and evaluate results. Monitor progress during implementation and communicate important actions to stakeholders.
When completed, evaluate and document lessons learned and communicate how well the team did.
We will strive to improve daily decision making by utilizing five principles: what is the goal of the decision, what benefits are needed, what options do we have, do the options meet our needs, and what is our best option?
To improve services, we will review risk management. We will implement a corrective and preventive action (CAPA) process, implement proactive measures, implement containment measures, and determine future actions to be taken.
To improve services, some steps to consider are: take preventive action; properly assign action items to team members; implement containment measures; note solutions to difficult problems to improve the solution team; and document solutions and the various damage control measures.
To improve services, we implement corrective action: determine what actions to implement to correct and prevent the issue, make sure key actions get scheduled and completed, and report important improvements to management.
We will implement containment measures: note solutions that can solve bad problems, and implement actions that can reduce overall damage.
The last phase involves correction. What can we do to prevent the problem from occurring again? Recommend corrections and decide which ones to implement. We will schedule all important actions and ensure they get completed.
We will improve change management to ensure corrections get implemented by deadlines.
To improve operations, we will manage action items and tasks effectively. Problem tasks are units of work that help solve a problem as you work toward a goal. We will keep tasks small and manageable, and each task should be clear and concise. We will describe the time, cost, performance, restraints, and requirements.
Some of the challenges of problem tracking are: lack of follow-up; the backlog grows and becomes stale; and trouble tickets and current issues may reduce the focus and time available to solve the issue.
We will strive to improve task completion. We will report on task status and completion percentage, and we will prioritize, maintain, and reduce outstanding tasks. We will prioritize tasks based on impact to the user; any repeat problems will be addressed first. We can calculate the cost of allowing tickets to stay open by determining the number of hours spent working on a ticket. We can estimate the number of hours required to close a ticket and then request the needed resources.
To improve service, we will utilize problem clarification tools. We should completely understand the problem, accurately describe the current situation, and work to reduce the recurrence rate. To provide further clarification, we will utilize the five whys, which help us understand the cause-and-effect correlation. The goal is to thoroughly understand the issue and how to reduce any recurrence. If after reviewing the five whys we don't have a root cause, we will launch an investigation.
One important factor in problem solving is prioritization. We will use prioritization tools to improve the performance of the team. We want to prioritize urgent tasks first and then important items.
To improve decision making, we will utilize the Eisenhower matrix. If the task is considered important and urgent, do it. If the item is considered important but not urgent, decide when to do it. If the task is considered urgent but not important, delegate it. If the item is considered neither urgent nor important, delete it.
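The four quadrants above map naturally to a small function; this is a sketch of the classification, not a prescribed implementation:

```python
def eisenhower(important: bool, urgent: bool) -> str:
    """Classify a task per the Eisenhower matrix."""
    if important and urgent:
        return "do"        # important + urgent: do it now
    if important:
        return "decide"    # important, not urgent: schedule it
    if urgent:
        return "delegate"  # urgent, not important: hand it off
    return "delete"        # neither: drop it

print(eisenhower(important=True, urgent=False))  # decide
```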
We can also use a technique called relative prioritization. First we brainstorm a list of tasks, then we mark the most important with a ten. The rest get a relative number of importance: if a task is half as important as a ten, it gets a 5, and so forth. We will complete tasks in this order.
The next technique we'll utilize is pain value analysis. This approach uses a formula to determine the most important tasks, reviewing historical data to determine priority. One way to perform the analysis is to export the data to Excel and configure a report. Our pain formula will include downtime, users impacted, severity, loss of income, etc.
A formula example: pain = users impacted x outage minutes x cost of downtime per minute. This will help us focus on the most critical tasks.
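The pain formula can be sketched directly in Python; the incident names and figures below are hypothetical:

```python
def pain(users_impacted, outage_minutes, cost_per_minute):
    """Pain = users impacted x outage minutes x cost of downtime per minute."""
    return users_impacted * outage_minutes * cost_per_minute

# Hypothetical incidents, ranked by pain score, highest first.
incidents = {
    "email outage":  pain(500, 30, 2.0),
    "wifi flapping": pain(50, 120, 0.5),
    "ERP crash":     pain(200, 45, 5.0),
}
for name, score in sorted(incidents.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score)
```

The highest-scoring incidents rise to the top of the remediation list.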
Another technique we can use to improve problem management is Pareto analysis. The basis is that 80% of problems come from 20% of underlying issues. To evaluate, we will create a bar chart with the contributors on the x-axis and the metric on the y-axis, sort the data from largest to smallest, identify the 80% mark, and focus on those contributors.
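A minimal Pareto sketch in Python, assuming invented ticket counts per contributor:

```python
# Hypothetical ticket counts per contributor.
tickets = {"printer driver": 120, "VPN client": 80, "email quota": 40,
           "monitor cable": 10, "misc": 10}

# Sort contributors from largest to smallest (the Pareto bar chart order).
counts = sorted(tickets.items(), key=lambda kv: kv[1], reverse=True)
total = sum(c for _, c in counts)

# Walk down the chart until the cumulative count reaches ~80% of all tickets;
# those contributors are the "vital few" to focus on.
vital_few, running = [], 0
for name, count in counts:
    if running >= 0.8 * total:
        break
    vital_few.append(name)
    running += count
print(vital_few)
```

In a real analysis, `tickets` would be the counts exported from the trouble ticket system.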
To continue to improve services, we will review possible-cause identification tools. Identifying the possible or root cause helps determine what caused the issue. There can be multiple candidate causes until the cause is narrowed down, and each possible cause must be tested against the facts.
The first step in the process is brainstorming. During the brainstorming session, people contribute ideas, and these ideas are weighted equally. We will focus on generating as many ideas as possible without passing judgment, and we will encourage the team to build on the ideas that are suggested.
A technique that can help is Kepner-Tregoe distinctions and changes. We focus on the facts of what IS and what IS NOT: what is different or unique about the IS versus the IS NOT situation? We list what is unique and the various distinctions, and each distinction is checked for changes. These changes become potential causes of issues and problems.
A technique that we will utilize is the fishbone (Ishikawa) diagram. This technique brainstorms by categories, which can include machines, methods, materials, environment, measurement, and personnel.
We list measurements, materials, methods, and processes to help determine problem statements. Each section has possible causes.
As an example, a firewall problem determination might break down like this:
Measurements: logging incorrect, monitoring failure.
Environment: load not balanced, traffic too intense, traffic type too heavy.
Materials: LAN cable or port defective, board defective.
People: more users on the system, uncontrolled changes, poor vendor training.
Methods and processes: config has reset, config not optimal, installation incorrect.
Machines and equipment: hardware failure, poor maintenance, power outage.
Another technique is six thinking hats. We look at the situation from different perspectives, and each hat is used at a different stage to discuss the issues. Blue: big picture, process, and planning. Red: emotions, reactions, and feelings. Yellow: optimism and positive effects. White: neutral information and data. Black: critique, risk, and challenges. Green: creativity, alternatives, and solutions.
Cause mapping and avoidance: visually understanding how issues occurred helps prevent recurrence. We can utilize tools to help map out problem stories and organize the investigation. This will help determine actions that can be implemented to alleviate and rectify the situation.
Another technique is Apollo root cause analysis. We visually map out causes, effects, and known relationships. We focus on implementing effective solutions that prevent recurrence, do not cause secondary issues, and meet organizational goals. Solutions are designed against the causal chain to prevent the root cause from recurring.
Another technique is TapRooT. We visually map out cause-and-effect factors, document the circumstances behind the incident, and develop a list of corrective actions. This technique is useful for accident and quality investigations.
Failure mode and effects analysis (FMEA) works to address potential failures. A failure mode is any way an error or defect could affect the customer; the analysis is used to prevent failures by cataloging the ways things could fail and their consequences.
During the analysis, we document the consequences of each failure. High-risk, high-consequence failures are prioritized and addressed.
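FMEA commonly scores each failure mode with a risk priority number (RPN = severity x occurrence x detection, each rated 1-10); a sketch with hypothetical failure modes:

```python
# Hypothetical failure modes: (description, severity, occurrence, detection), each 1-10.
modes = [
    ("disk fails silently",      9, 3, 8),
    ("fan stops, CPU throttles", 5, 6, 2),
    ("PSU fails, node down",     8, 2, 1),
]

def rpn(severity, occurrence, detection):
    """Risk priority number: higher means address first."""
    return severity * occurrence * detection

# Rank failure modes so the highest-risk ones are prioritized.
ranked = sorted(modes, key=lambda m: rpn(*m[1:]), reverse=True)
for desc, *scores in ranked:
    print(desc, rpn(*scores))
```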
The last step involves investigation frameworks. A framework provides an end-to-end process to accomplish a goal. Each framework has its own strengths and weaknesses, and a good understanding of frameworks can assist you in solving difficult problems and reaching goals.
The A3 framework is a continuous improvement process built on the plan-do-check-act cycle. Plan: identify an opportunity for improvement or change. Do: test the change and conduct the study. Check: review the test, analyze the results, and note what was learned. Act: take action based on what was learned.
Another technique is Kepner-Tregoe, a systematic process for thinking critically. Situation appraisal: prioritize and manage the team's concerns. Problem analysis: gather data and determine root cause. Decision analysis: make a formal decision or recommendation. Potential problem analysis: avoid risk.
Another technique is 8D (nine steps when the initial planning step D0 is counted), which is used in the automotive industry: a step-by-step methodology to identify, correct, and eliminate recurring problems. The steps include creating the core team and congratulating the team for a job well done.
The last technique is Six Sigma, a continuous improvement process. Its DMAIC framework is Define, Measure, Analyze, Improve, and Control.
Some additional ways to improve skills: study process improvement methods such as Six Sigma, project management, data analysis, SQL scripting, PowerShell, and Python. Improve critical thinking skills with Kepner-Tregoe. Attend annual conferences such as ServiceNow's. Developing and updating skill sets is a continuous improvement process.
We have begun the process of laying the foundation for our hybrid cloud environment on Azure. We have created Azure subscriptions for production, development operations, and testing.
Migrating mission-critical services to the Azure cloud is imperative. We have designed, built, and deployed virtual resources and machines for our Windows infrastructure.
We will configure all the necessary resources for the Azure virtual network, including network settings on the virtual machines such as Azure virtual networking, public and private IP addressing, subnetting, and firewall configuration.
After configuring the private and public IPs for the VM, we will set up the Azure virtual network (VNet). In the process of configuring the VNet, we determine the appropriate configuration, such as VNet-to-VNet connectivity. The VNet configuration will connect remote subnets and resources together.
To connect the remote resources, we will create a new virtual network gateway on a gateway subnet. The VNet-to-VNet connection is what actually ties the remote resources together. Once the connections are created, we will assign the keys needed to verify a secure connection.
One technique to connect remote resources is VNet peering. We can also deploy a VNet gateway to connect remote resources and use gateway connections to allow or deny network traffic. The connections should be associated with the same subscription to function properly.
We will review the process to set up Domain Name Service (DNS). We can set up DNS using Azure-provided DNS servers; this configuration supports Azure private zones, which allow additional security. Another option is to use our internal Windows or Linux DNS servers, which gives us more options to manage our on-premises VMs and resources.
Azure-provided DNS has several advantages: no additional configuration is needed, the service is ready to go once deployed, and fully qualified DNS names are not required, which simplifies DNS services. Azure DNS is also highly available, reducing downtime; high availability includes redundant backup DNS servers.
Azure-provided DNS has some disadvantages as well. The DNS suffix cannot be changed, and WINS and NetBIOS are not supported. This must be taken into consideration when deploying Azure DNS servers; it is probably not the best solution for an internal hybrid environment.
When you implement internal DNS, the scavenging service should be turned off. We will configure Azure DNS to facilitate improved name resolution on premises.
For hybrid environments we will implement our own DNS servers within our domain. This will allow us to connect our Azure virtual machines to our internal on-premises servers, and to connect Azure virtual machines to multiple networks. This configuration supports both forward and reverse lookup of IP addresses for remote resources.
To configure Azure DNS we will create a DNS zone. We will assign the zone to the appropriate subscription and subnets, and configure and name it based on the domain name and standard naming conventions.
Once the DNS zone is created, Azure assigns DNS servers for delegation. In the DNS zone we can get the DNS server information (IP addresses) for delegation purposes. Typically the domain name is the one purchased from the web registrar, for example Contoso.com.
The next step is adding DNS records to our zone. The first record we will add is www, which is an A record. We will leave the TTL set to 1 hour. We can set up CNAME records for aliases, and MX records for our email server and any additional services needed.
Since we are setting up DNS for our web server, we will use its physical IP address. Once the A record is created we will test connectivity by using the nslookup command, which should return the name and IP address of the web server. To create a private DNS zone, you must use PowerShell as opposed to the GUI.
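Alongside nslookup, a quick Python check can confirm a name resolves (the example queries localhost so it runs anywhere; in practice you would check the web server's name, e.g. www.contoso.com):

```python
import socket

def resolve(hostname):
    """Return the IPv4 address for a hostname, or None if it does not resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# "localhost" keeps the example self-contained; swap in your A record's name
print(resolve("localhost"))  # typically 127.0.0.1
```

A None result here tells you the record (or delegation) is not in place yet.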
To complete the configuration of the network we will set up network security groups. A network security group is a list of rules that allow or deny traffic. These rules apply to virtual machines in a subnet and to the network interfaces connected to a virtual machine, and can be applied to inbound or outbound traffic.
The network security group (NSG) workflow is as follows: traffic is sent to the Azure VNET, the NSG rules are processed, and the rules determine whether the inbound traffic is allowed or denied.
When a virtual machine is provisioned, default security rules are created. By default, inbound VNET traffic is allowed, and inbound traffic from load balancers is allowed as well. The last default rule denies all other inbound traffic.
Outbound default rules include: allow outbound VNET traffic, allow outbound internet traffic, and a final rule that denies all other outbound traffic.
When establishing security rules, they should include a source and source port range, as well as a destination and destination port range. You can match all traffic by using an asterisk (or source port "any"). You must specify which protocol is to be used, and the action: allow or deny traffic. Additionally, we set a priority on each rule. Rules are processed in priority order: the lowest priority number is processed first and the highest last.
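To make the priority processing concrete, here is a toy Python sketch (the rules and priorities are made up for illustration, not Azure's actual defaults):

```python
# Toy model of NSG rule processing: rules are evaluated in ascending
# priority order (lowest number first) and the first match wins.
def evaluate(rules, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == "*" or rule["port"] == port:
            return rule["action"]
    return "Deny"  # nothing matched

rules = [
    {"priority": 100, "port": 3389, "action": "Allow"},  # allow RDP
    {"priority": 500, "port": "*",  "action": "Deny"},   # deny everything else
]

print(evaluate(rules, 3389))  # Allow
print(evaluate(rules, 80))    # Deny
```

Note how the RDP allow only works because its priority number (100) is lower than the catch-all deny (500).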
A scenario we will deploy is a small network with two subnets. The VNET will deny all traffic except RDP traffic. To accomplish this we will deny all traffic to the VNET and associate the two subnets. We will test this scenario by trying to RDP to the virtual machines (VMs).
To update security rules, we will create a network security group. The NSG has default inbound and outbound security rules established, and is associated with a subscription and resource group. To create a security rule, we select inbound or outbound; here we will create an inbound NSG rule for RDP. In order for the NSG to take effect, it must be associated with a subnet of the VNET. Since we want to test a deny-RDP rule, we will select the subnets and the associated VMs, choosing both the subnet and the network interface. To view the changes and topology, we can utilize Network Watcher and verify that the network and subnet are properly associated and will route traffic accordingly. Any traffic bound for this network and subnet is subject to the rules within the NSG. We will now associate the virtual machine's network interface by editing the security group associated with it; by default the security group is scoped to the VM itself. Once complete, the network interface should be associated with the correct security group. These changes must be done through the network interface due to system constraints.
Many of these tasks can be completed through PowerShell. The first step is to assign variables such as name and description. Once the NSG is created, we will assign it to the appropriate subnet. One of the main cmdlets is Get-AzVirtualNetwork. We will create an inbound rule to allow access to a web server. The last step is to associate the VM with the appropriate subnet. If you ever need to delete an NSG, you must first disassociate it from the subnet.
The next step is to add a rule to the NSG to allow access. We'll select inbound security rules and add a rule. Select a source, such as any, an IP address, or an application security group, then select the source port range. Next we'll specify the destination (NSG, IP address, or application security group) and the destination port range; to allow RDP use port 3389. We'll specify the action (allow or deny traffic), then add a priority. This rule must have a lower priority number than the priority-500 block-all-traffic rule. We'll give the rule a name, and once the rule is created, we'll test RDP.
When the network starts to become more complex with multiple NSGs, it is important to evaluate the effectiveness of your security rules. To help evaluate NSGs and rules we will use Network Watcher to review the effective security rules. We will select the subscription, resource group, and the VM, and the rules for that resource will be presented, including the NSG and its inbound and outbound rules. Within this configuration we set up one NSG for RDP and one for access to the web server.
To determine how security rules are affecting a specific VM, go to topology and select the VM. Within the VM, select networking; this will show the specific inbound and outbound rules. Reviewing the NSG will help determine what traffic is allowed to the subnet and then to the network interface of the VM.
Within the NSG, we are allowing HTTP on port 80 to the VM's subnet. Local to the VM, the network interface is blocking all inbound traffic. Using effective rules allows us to manage traffic to our subnets and VMs.
As we deploy a wide range of solutions, we can help improve services, operations, and security. Please contact us for more information!
Computer networks form the foundation of the internet. While reviewing network operations and protocols, we will also review the Microsoft Networking Fundamentals exam (98-366).
In the early days of the internet, people connected via a dial-up modem, typically at 28.8 Kbps. The connection ran over POTS, the plain old telephone system. You may remember using AOL dial-up and the "You've got mail" message, wow. In 2000, roughly 70% of home internet users were still on dial-up.
The next advancement in technology was the digital subscriber line. DSL is asymmetric and transmits both voice and data.
The typical home network consists of broadband and a modem. Broadband uses DOCSIS, the Data Over Cable Service Interface Specification, a high-bandwidth transmission standard for cable. DOCSIS supports high-bandwidth transfers (around 1 Gbps) via data modulation techniques. Broadband is a shared medium and will show slowdowns during peak usage hours (8 pm – 12 am).
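A back-of-envelope sketch of why a shared medium slows at peak (real DOCSIS scheduling is far more sophisticated, and the numbers here are purely illustrative):

```python
# Hypothetical 1 Gbps cable segment split evenly among active subscribers.
def per_user_mbps(total_gbps, active_users):
    return total_gbps * 1000 / active_users

print(per_user_mbps(1, 10))   # a quiet afternoon: 100.0 Mbps each
print(per_user_mbps(1, 200))  # peak hours: 5.0 Mbps each
```

The takeaway: the same pipe feels very different at 8 pm than at 8 am.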
ISDN, the Integrated Services Digital Network, and leased lines provide a means to connect remote offices. This technology uses fully digital transmission and supports video at 64 Kbps per channel.
The two interfaces used are basic rate (home use) and primary rate. Basic rate has 2 B channels at 64 Kbps and 1 D channel at 16 Kbps.
Primary rate was designed for business: 23 B channels at 64 Kbps and one D channel at 64 Kbps. The circuits are T1 circuits. T1 provides internet connections between remote sites and voice connectivity for a PBX over leased lines, carrying 23 voice channels.
A more affordable option for voice over IP is SIP session intilization protocol trunking.
T1 – 24 channels, 1.544 Mbps
T2 – 96 channels, 6.312 Mbps
T3 – 672 channels, 44.736 Mbps
E1 (European standard) – 32 channels, 2.048 Mbps
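The T1 figure above falls out of simple arithmetic; here is a quick Python check (channel and framing rates are the standard published values):

```python
# Where the T1 line rate comes from: 24 channels x 64 Kbps each,
# plus 8 Kbps of framing overhead = 1544 Kbps (1.544 Mbps).
def t1_kbps(channels=24, channel_kbps=64, framing_kbps=8):
    return channels * channel_kbps + framing_kbps

print(t1_kbps())  # 1544
# E1: 32 timeslots x 64 Kbps = 2048 Kbps (2.048 Mbps);
# framing and signaling use 2 of the 32 timeslots.
print(32 * 64)    # 2048
```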
MPLS, multiprotocol label switching, is a privately routed connection that uses label-switched routing: route tables map routes to labels, and packets are forwarded based on those labels. Label switching provides redundancy and resiliency.
Customers' internal networks connect to MPLS via virtual routing; MPLS operates at layer 3. The on-premises network will connect via OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol). Using a dynamic routing protocol allows companies to easily add new locations, since new routes are added dynamically.
VPLS, virtual private LAN service, provides layer 2 bridging. The edge device can be a switch, often provided by the service provider. VPLS is a cost-effective means to connect multiple sites.
VPNs and tunnels are a cost-effective way to connect two remote resources. A VPN creates a secure tunnel, typically site to site. The VPN encapsulates the data and securely sends it across the network; once the data arrives it is de-encapsulated.
We can also set up unencrypted tunnels using GRE, generic routing encapsulation (IP protocol 47), which can carry TCP, UDP, and multicast traffic. This protocol works well with OSPF.
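As a hedged sketch of what GRE encapsulation looks like on the wire, here is a minimal RFC 2784 header built with Python's struct module:

```python
import struct

# Minimal GRE header (RFC 2784): 2 bytes of flags/version followed by a
# 2-byte protocol type (0x0800 means the payload is IPv4).
# GRE itself rides inside IP as protocol number 47.
GRE_IP_PROTO = 47

def gre_header(protocol_type=0x0800, flags=0):
    return struct.pack("!HH", flags, protocol_type)

hdr = gre_header()
print(len(hdr), hdr.hex())  # 4 00000800
```

Real GRE implementations may add optional checksum and key fields, but this is the 4-byte base header.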
Wireless technologies include fixed wireless provided by an internet service provider. This solution is cost effective, though speed may fluctuate. Another option is satellite wireless, which can be used to access the internet and transport data from remote locations. This service tends to be expensive, and satellite is low bandwidth and high latency, so it is not good for voice over IP service.
Wireless services also include 3G and 4G. With a good cell connection, throughput can be 3 – 4 Mbps. These plans can be quite expensive, but the service is good for data backups.
Based on this information, we can help plan, design, and integrate your network. The end result will be improved service and overall performance. Please contact us for more information. Thanks
We provide extensive management of passwords and cloud identities. A cloud identity is an object stored in an Active Directory database and contains object attributes. We will manage the environment with group policy.
We will setup and manage password policies.
We will utilize PowerShell to automate processing of batch jobs and repetitive tasks.
Add bulk users
Update bulk user passwords
Manage user licenses
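As a sketch of the batch-processing idea (the create_user function and the CSV columns here are hypothetical stand-ins for whatever provisioning cmdlet or API you actually use):

```python
import csv
import io

def create_user(name, license):
    """Hypothetical stand-in for a real provisioning call."""
    return f"created {name} with {license}"

def bulk_add(csv_text):
    """Process each row of a users CSV and provision an account."""
    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        results.append(create_user(row["name"], row["license"]))
    return results

sample = "name,license\nalice,E3\nbob,E5\n"
print(bulk_add(sample))
```

The same loop-over-a-CSV pattern is what the PowerShell version automates: one exported spreadsheet in, a batch of provisioned users out.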
We can help improve your Office 365 deployment and experience.
• Registers – point to memory locations that contain the next set of instructions to execute.
• Arithmetic logic unit (ALU) – does the actual execution of instructions.
• Control unit – manages and synchronizes the system while application code and operating system instructions are executed.
• General registers – hold variables and temporary results.
• Program status word – holds condition bits, indicating whether the CPU should be working in user mode (problem state) or privileged mode (kernel / supervisor mode).
• To access data, the CPU sends a fetch request on the address bus.
Random access memory (RAM) – temporary storage where data and program instructions can be held and altered. RAM is volatile, meaning that loss of power results in loss of data.
Hardware segmentation – memory is separated physically instead of just logically. This helps protect higher-level processes' memory space.
Cache memory – used for high-speed writing and reading activities.
Motherboards have different types of cache.
• Level 1 – fastest
• Level 2 – 2nd fastest
• Level 3 – 3rd fastest
L1 and L2 cache are typically built into controllers and processors.
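A toy Python model of the hierarchy above: look in the fastest level first and fall back to the slower ones (the latencies and cache contents are illustrative only, not real hardware figures):

```python
# Cache levels paired with made-up latencies, fastest first
LEVELS = [("L1", 1), ("L2", 4), ("L3", 12)]

def lookup(address, caches):
    """Return (level, latency) for the first cache level holding the address."""
    for (name, latency), contents in zip(LEVELS, caches):
        if address in contents:
            return name, latency
    return "RAM", 100  # miss in every level: go all the way to main memory

caches = [{0x10}, {0x10, 0x20}, {0x10, 0x20, 0x30}]
print(lookup(0x20, caches))  # found in L2
print(lookup(0x99, caches))  # miss everywhere, falls through to RAM
```

This is why keeping hot data in L1 matters: each level down costs noticeably more cycles.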
Having a great understanding of all the facets that go into computer architecture allows you to get the best performing system while providing excellent security.
My goal is to provide excellent information on computer hardware: Personal computers, servers, network, security devices, and mobile devices. Getting the best devices @ the best price is the goal of JBrock Consulting. Shop and see our products at https://jbrock-consulting.azurewebsites.net/shop/
Please subscribe to get the latest information on products, pricing, and features.
Everyone uses email and being more productive can enhance your career. In today’s work environment, email is a mission critical application.
Outlook is a great communication tool. You can load Outlook on your PC, Mac, or mobile device. Here are some of the key tasks you can do with Outlook:
Manage appointments using calendar features.
Share files via the cloud, such as the OneDrive application.
Stay productive and connected anywhere in the world.
Organize email to focus on key messages.
Use @mentions to get someone's attention.
How to add @mentions – in the body of the email, type the @ symbol and the first few letters of a user's name. Outlook will offer a list of contacts to add. This will get the reader's attention and probably a response.
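The suggestion behavior amounts to a prefix match over your contacts; here is a minimal Python sketch (the contact names are invented for illustration):

```python
# After "@" plus a few letters, offer contacts whose names start with
# those letters, case-insensitively, the way the suggestion list behaves.
def suggest(prefix, contacts):
    prefix = prefix.lower()
    return [c for c in contacts if c.lower().startswith(prefix)]

contacts = ["Alice Smith", "Albert Jones", "Bob Lee"]
print(suggest("al", contacts))  # ['Alice Smith', 'Albert Jones']
print(suggest("bo", contacts))  # ['Bob Lee']
```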
Managing your calendar and contacts in Outlook
When scheduling meetings and appointments, use the scheduling assistant. The calendar scheduling assistant allows you to see when attendees and rooms are available: the bars in the times field indicate when attendees are busy or free, and the rooms tab on the right will let you know when rooms are available. This makes scheduling meetings pain free.
How to collaborate using Outlook
Outlook allows users to share a file attachment so you can collaborate on data files with others. In Outlook, select attach file for the email message. Files with a cloud icon are stored in the cloud, such as in the OneDrive application. This allows multiple users to make changes to the file, enhancing collaboration.
How to set up an online meeting with notes
To set up an online meeting, in Outlook select Skype meeting and choose a date and time. Note, you have to be logged into Skype to set up the online meeting. This inserts a link that attendees can use to join the meeting.
To set up meeting notes, select meeting notes on the Outlook ribbon bar. This allows you to select a OneNote notebook to document the minutes for your meeting.
Outlook is an amazing productivity tool. For additional useful tips, please subscribe. We will provide great productivity tips for our valued readers. Thank you and much appreciated.
Training and career development are a crucial component of improving yourself and becoming more successful. During my studies, I completed an extensive review of available training platforms, and I have some very useful and valuable information.
I started reviewing some additional training sponsored by Google through Coursera. The first program was the Google IT Support Professional Certificate. Coursera gives users a 7-day free trial with full access to every course in your specialization. I enrolled in the IT support professional certificate specialization and I liked the class very much.
The IT support class was a combination of video lectures, exercises, and module quizzes. I found the material interesting and informative. The program covered some very interesting topics: digital logic, computer architecture, operating systems, networking, software, troubleshooting, and customer care. For some of the hands-on exercises, we used Google cloud services to spin up servers and associated services. I really liked using Google cloud services. The cost for the program was $49 per month. I completed the course and received the following course certificate.
The next program I worked on was System Administration and Information Technology Infrastructure Services. Since this is the type of work I have done for most of my career, I was very interested in this topic. It matches the work I do as a system administrator, and the topics covered were: cloud services, server maintenance, infrastructure services, hardware provisioning, system maintenance, virtualization, remote access, SSH, network services, software services, file and print services, platform services, directory services, and data recovery & backups. The cost for this program was $49 per month. The course material kept me engaged and working hard to complete each module, and I felt it was well worth the price. I completed the course and received the following course certificate.
I really enjoyed the first two classes, so I continued on my Information Technology specialist track. The next class in the program was computer networking. Designing and building networks is a passion of mine, so learning more about networks was exciting. The class covered the following topics: the TCP/IP 5-layer network model, the OSI network model, networking devices, network setup, the physical layer, data link layer, network layer, subnetting, routing, the transport layer, firewalls, the application layer, network services, virtual private networks, wide area networks, wireless, Domain Name Service, cloud networks, and troubleshooting. The class was great and well worth the price of admission! I completed the course and received the following course certificate.
Conclusion: If you are looking to continue your education and improve your skill set, I recommend Coursera programs. The classes are designed to help you stay motivated and on track. Each module has a deadline, but this can be extended if you need additional time. Feel free to leave any comments on your experience with Coursera. If you need any assistance please contact me. Thanks
Note: I will be covering some of these topics in more detail in future posts.
We went out on the town and had dinner at the 110 Grill in Braintree, MA. There was a good-size crowd and a nice atmosphere. The Bruins were on TV, game 2 of the NHL playoffs; for Bruins fans, it was kind of a must win. I ordered an Arnold Palmer, in honor of the Masters golf tournament and the king of golf, Arnold Palmer. Pretty tasty!
We were feeling good, the service was excellent, and the waitress was cute! Not a bad start to the evening.
We got a table pretty quickly and had a nice view of the Bruins game.
We ordered an appetizer, the Chorizo Totchos; the food came out pretty fast and it looked pretty good!
We received our appetizer, which looked good. The only downside was no silverware to eat it with, but we got some from a passing waiter and off we went. The appetizer, "Chorizo Totchos," was very good. The main ingredient (potatoes) was cooked to perfection, and the taste was delicious with chives and sour cream, plus a hint of chili sauce and salsa. I would say a great appetizer, definitely very pleased! The portion was great too, plenty of meat and potatoes to get you started! So far I was impressed!
For my entrée, I got the Shrimp & Clam linguini. The sauce (butter & lemon) was hot and tasty, the shrimp and clams were cooked well, and the meal felt healthy and somewhat light. The toasted bread was a nice touch, and I enjoyed my meal while watching the Bruins. An added bonus: the Bruins were leading game 2 (2 – 0).
The burger and onion rings got 10 out of 10 stars (**********). The Clam & Shrimp linguini got 8.5 out of 10 stars (*********). The meal did not blow me away, but it was really good. I would recommend the 110 Grill as a place to put into your dining rotation! We had a good time and really liked the 110 Grill!