US20110047617A1 - Protecting against network resources associated with undesirable activities - Google Patents

Protecting against network resources associated with undesirable activities

Info

Publication number
US20110047617A1
Authority
US
United States
Prior art keywords
resource
identified
network resource
safe
recited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/939,735
Inventor
Aaron H. Averbuch
Manav Mishra
Roberto A. Franco
Tariq Sharif
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US12/939,735
Publication of US20110047617A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC (Assignment of Assignors Interest; see document for details). Assignors: MICROSOFT CORPORATION
Legal status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2119Authenticating web pages, e.g. with suspicious links

Abstract

Various embodiments provide protection against web resources associated with one or more undesirable activities. In at least some embodiments, a method detects and responds to a user-initiated activity on a computing device. Responding can include, by way of example and not limitation, checking locally, on the computing device, whether a web resource that is associated with the user-initiated activity has been identified as being associated with a safe site. Furthermore, in at least some embodiments, the method checks remotely, away from the computing device, whether the web resource is identified as being at least possibly associated with one or more undesirable activities.

Description

    RELATED APPLICATIONS
  • This application is a continuation of and claims priority to U.S. application Ser. No. 11/272,473, filed on Nov. 10, 2005, and entitled “Dynamically protecting against web resources associated with undesirable activities,” the disclosure of which is hereby incorporated by reference herein in its entirety.
  • BACKGROUND
  • Many threats have emerged regarding online communications. Often, these threats involve web resources that can be associated with undesirable activities that can somehow impact a user and/or the user's computing device. Undesirable activities can come in many shapes and sizes. For example, phishing, where scammers or other bad actors attempt to gain illegal or unauthorized access to private information, is one example of such a threat.
  • Online communication can allow these scammers to reach many people easily through the use of such things as e-mail, instant messaging, or rogue web pages. Often, a user is misled into navigating to a fraudulent link that the user believes is trustworthy. As a consequence, the user may be subject to attempts to elicit private information from the user. For example, a user might type “bankoamerica.com” in an address box in an attempt to link to a Bank of America website. Once the user navigates to what appears to be, but is not, a legitimate Bank of America website, the user might inadvertently divulge private information upon request and thus be “phished”.
  • Another way in which a user can be “phished” is by responding to an email that appears to the user to be legitimate. For example, the user may be involved in an online transaction (such as an eBay auction) and receive an email which requests that the user click a link and enter personal information in that regard.
  • Other examples of undesirable activities can include such things as unknowingly receiving spyware or malware.
  • SUMMARY
  • Various embodiments can protect a user against web resources associated with one or more undesirable activities. In at least some embodiments, a method detects and responds to a user-initiated activity on a computing device. Responding can include, by way of example and not limitation, checking locally, on the computing device, whether a web resource that is associated with the user-initiated activity has been identified as being associated with a safe site. After checking locally, some embodiments present the user with a notification that the web resource is not associated with a safe site. The user is then given an option to check remotely or to continue with the user-initiated activity without checking remotely. Furthermore, in at least some embodiments, if the web resource is not identified as being associated with a safe site, the method checks remotely, away from the computing device, whether the web resource is identified as being at least possibly associated with one or more undesirable activities.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example operating environment in accordance with one or more embodiments.
  • FIG. 2 is a flow diagram that describes steps in a method in accordance with one or more embodiments.
  • FIG. 3 continues from FIG. 2 and is a flow diagram that describes steps in a method in accordance with one or more embodiments.
  • FIG. 4 is a flow diagram that describes steps in a method in accordance with one or more embodiments.
  • FIG. 5 continues from FIG. 4 and is a flow diagram that describes steps in a method in accordance with one or more embodiments.
  • FIG. 6 is a flow diagram that describes steps in a method in accordance with one or more embodiments.
  • FIG. 7 continues from FIG. 6 and is a flow diagram that describes steps in a method in accordance with one or more embodiments.
  • FIG. 8 illustrates a notification icon and list box presented to a user in accordance with one or more embodiments.
  • FIG. 9 illustrates a dialog box presented to a user in accordance with one or more embodiments.
  • FIG. 10 illustrates a dialog box presented to a user in accordance with one or more embodiments.
  • DETAILED DESCRIPTION
  • Various embodiments can protect a user against web resources associated with one or more undesirable activities. In at least some embodiments, a method detects and responds to a user-initiated activity on a computing device. Responding can include, by way of example and not limitation, checking locally, on the computing device, whether a web resource that is associated with the user-initiated activity has been identified as being associated with a safe site. After checking locally, some embodiments present the user with a notification that the web resource is not associated with a safe site. The user is then given an option to check remotely or to continue with the user-initiated activity without checking remotely. Furthermore, in at least some embodiments, if the web resource is not identified as being associated with a safe site, the method checks remotely, away from the computing device, whether the web resource is identified as being at least possibly associated with one or more undesirable activities.
  • Example Implementation
  • FIG. 1 illustrates an exemplary system, generally at 100, in which the various embodiments described below can be implemented. These various embodiments can protect against web resources that are determined or suspected of being associated with one or more undesirable activities.
  • There, system 100 includes a client 102 in the form of a computing device, a server 104 that is remote from the computing device, and a network 106 through which client 102 and server 104 can communicate. Client 102 can comprise any suitable computing device, such as a general purpose computer, handheld computer, and the like. In one embodiment, network 106 comprises the Internet.
  • In this example, client 102 embodies one or more software applications 108 through which client 102 and server 104 can communicate. Software application(s) 108 typically take the form of computer-readable instructions that reside on some type of computer-readable medium. Although any suitable application can be used, in the embodiments described in this document, an application in the form of a web browser is used. It is to be appreciated and understood, however, that other types of applications can be used without departing from the spirit and scope of the claimed subject matter. For example, applications such as word processing applications, email applications, spreadsheet applications, and the like can utilize various techniques described in this document.
  • Various techniques may be described herein in the general context of software or program modules. Generally, software includes routines, programs, objects, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. An implementation of these modules and techniques may be stored on or transmitted across some form of computer readable media. Computer readable media can be any available medium or media that can be accessed by a computing device. By way of example, and not limitation, computer readable media may comprise “computer-readable storage media”.
  • “Computer-readable storage media” include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer-readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • FIGS. 2 and 3 are flow diagrams that describe a method in accordance with one embodiment. The method can be implemented in connection with any suitable hardware, software, firmware or combination thereof. In one embodiment, the method is implemented in software in the form of computer-executable instructions, such as those defining an application that executes on a client computing device.
  • Step 200 detects a user-initiated activity on a client computing device. Any suitable application can be used to detect the user-initiated activity. For example, in one embodiment, an application in the form of a web browser is used to detect a user-initiated activity in the form of a navigation associated with a web resource. In addition, any suitable manner of initiating the navigation can be utilized. For example, in some embodiments, navigation can be initiated by a user clicking on a particular link that the user finds on a web page. Alternately or additionally, the navigation can be initiated by a user typing a URL in an appropriate address box that comprises part of the web page the user is browsing.
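  • As a rough illustration of how step 200 might be wired up, consider the following TypeScript sketch. The handler and type names are invented for this example and are not taken from the patent; the actual detection mechanism is left open by the description.

```typescript
// Illustrative only: a hosting application (for example, a browser shell) would
// call onUserInitiatedNavigation() when the user clicks a link or submits a URL
// typed into the address box. All names here are hypothetical.
type NavigationSource = "link-click" | "address-box";

interface UserNavigation {
  url: string;
  source: NavigationSource;
}

function runSafetyChecks(url: string): void {
  // Placeholder for steps 202 and 208; later sketches flesh these out.
  console.log(`Checking ${url}...`);
}

function onUserInitiatedNavigation(nav: UserNavigation): void {
  // Step 200: a user-initiated activity has been detected; hand the
  // associated web resource to the safety checks.
  console.log(`User-initiated ${nav.source} navigation to ${nav.url}`);
  runSafetyChecks(nav.url);
}

// Example usage:
onUserInitiatedNavigation({ url: "https://example.com/", source: "link-click" });
```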
  • Responsive to detecting the user-initiated activity, step 202 checks locally, on the client computing device, to ascertain whether a web resource that is associated with the user-initiated activity is identified as being associated with a safe site. This step of checking locally on the client computing device can occur contemporaneously with the user-initiated activity. For example, conducting such a check can occur contemporaneously with conducting a navigation associated with a third-party web site.
  • In some embodiments, the local device can maintain a list of sites that have been determined to be safe. For example, the microsoft.com® site might appear on such a list and be considered a safe site. More generally, a safe site can be considered as one that is not associated with activities that are considered to be undesirable. One type of undesirable activity is phishing, although other undesirable activities can be the subject of the check without departing from the spirit and scope of the claimed subject matter. These other activities can include, by way of example and not limitation, activities associated with exposing the user to malware or spyware.
  • In conducting the local check, step 202 can be performed in any suitable way. By way of example and not limitation, a Uniform Resource Locator (URL) associated with a user-initiated navigation can be compared to a local list of URLs which are known to be safe.
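  • A minimal sketch of such a local check is shown below, assuming the safe list is kept in memory and matched by hostname; the list contents and the matching rule are illustrative assumptions, since the description does not prescribe a particular comparison scheme.

```typescript
// A minimal local safe-site check (steps 202/204). The allow-list contents and
// the exact-hostname matching rule are assumptions made for this sketch.
const LOCAL_SAFE_HOSTS: ReadonlySet<string> = new Set([
  "microsoft.com",
  "www.microsoft.com",
]);

function isIdentifiedAsSafeLocally(url: string): boolean {
  try {
    const host = new URL(url).hostname.toLowerCase();
    return LOCAL_SAFE_HOSTS.has(host);
  } catch {
    // An unparseable URL cannot be matched against the local safe list.
    return false;
  }
}

// The "yes" branch from step 204 lets the navigation continue; the "no" branch
// leads to the remote check of step 208.
console.log(isIdentifiedAsSafeLocally("https://www.microsoft.com/")); // true
console.log(isIdentifiedAsSafeLocally("http://bankoamerica.example/")); // false
```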
  • If a match occurs (the “yes” branch from step 204), the URL associated with the navigation is identified as being associated with a safe site and step 206 allows the user to continue with the user-initiated activity.
  • If, on the other hand, the web resource is not identified as being associated with a safe site (i.e. the “no” branch from step 204), then step 208 checks remotely from the computing device to ascertain whether the web resource is identified as at least possibly being associated with one or more undesirable activities.
  • The step of checking remotely from the computing device can also occur contemporaneously with the user-initiated activity. For example, during the remote check, a user-initiated navigation to a third party site can be allowed to continue to provide a smoothly-perceived user experience.
  • The remote check can be performed in any suitable way. While FIGS. 2 and 3 illustrate this step as being performed remotely from the client computing device, this is not to be construed as meaning that one or more portions of this step, as described below, cannot be performed on the local client computing device.
  • As an example, consider the following. In at least some embodiments, one or more remote servers can be provided with information associated with a particular web resource, such as a link or web site. This information can come from a third party service that is designed to look for and keep track of sites that are or become affiliated with undesirable activities such as phishing and the like. In some instances, this information might be utilized to develop what is referred to as reputation information which can then be used as part of a score-based system to rank the web resource, as described below. More specifically, the reputation information can be provided to the local computing device which can then compute a local score associated with the web resource. The reputation information and the local score can then be processed to derive a reputation score that is associated with the web resource. Utilizing one or more of these scores, the web resource can be ranked in categories such as: a web resource known to be associated with one or more undesirable activities, a web resource suspected of being associated with one or more undesirable activities, or a web resource that is not known or suspected of being associated with one or more undesirable activities.
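  • The description leaves the scoring mechanics open. The sketch below shows one way a local score and server-supplied reputation information might be combined into a reputation score and mapped onto the three categories above; the local heuristic, weighting, and thresholds are invented for illustration and are not specified by the patent.

```typescript
// Illustrative scoring only: the local heuristic, weights, and thresholds are
// assumptions for this sketch and are not specified by the patent.
type ResourceCategory =
  | "known-undesirable"
  | "suspected-undesirable"
  | "not-known-or-suspected";

interface ReputationInfo {
  // Reputation information reported by the remote service, normalized here to
  // 0 (benign) .. 1 (known bad).
  serverScore: number;
}

function computeLocalScore(url: string): number {
  // A toy local heuristic: penalize raw IP hosts and '@' tricks in the URL.
  const host = new URL(url).hostname;
  let score = 0;
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(host)) score += 0.4;
  if (url.includes("@")) score += 0.3;
  return Math.min(score, 1);
}

function deriveReputationScore(local: number, info: ReputationInfo): number {
  // Assumed combination rule: weight the server-side reputation more heavily.
  return 0.7 * info.serverScore + 0.3 * local;
}

function categorize(reputationScore: number): ResourceCategory {
  if (reputationScore >= 0.8) return "known-undesirable";
  if (reputationScore >= 0.4) return "suspected-undesirable";
  return "not-known-or-suspected";
}

// Example usage:
const suspectUrl = "http://192.0.2.10/login";
const combined = deriveReputationScore(computeLocalScore(suspectUrl), { serverScore: 0.9 });
console.log(categorize(combined)); // "suspected-undesirable" with these weights
```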
  • Step 210 determines whether the web resource is identified as at least possibly being associated with one or more undesirable activities. This can be accomplished in any suitable way. For example, here this can be accomplished by utilizing the web resource's derived reputation score, as noted above. Furthermore, this step can be performed completely remotely from the client computing device.
  • In the event that the web resource is identified as at least possibly being associated with one or more undesirable activities (i.e. the "yes" branch from step 210), step 212 provides a notification to this effect and step 214 (FIG. 3) notifies the user of this information. This can be performed in any suitable way. For example, the user might only be presented with an alert and/or a dialog box when the web resource has been identified as suspected of being, or actually being, associated with undesirable activities. For instance, in a score-based system, if the web resource is ranked in an appropriate category that suggests an undesirable association, then the user might be notified.
  • If the web resource is not identified as being associated with undesirable activities (i.e. the “no” branch from step 210), then a similar notification can be provided to the user at step 212.
  • Step 216 provides the user with an option to continue the user-initiated activity. Typically, this step is performed in the event that the web resource is identified as being associated with an undesirable activity, although it is illustrated slightly differently here.
  • Protecting Against Phishing Activities
  • As noted above, in at least some embodiments, techniques discussed herein can be implemented in the context of policing against phishing activities. By detecting a user-initiated activity and checking to ascertain whether an associated web resource is associated with phishing, the user can be protected from attempts by scammers or other bad actors to gain illegal or unauthorized access to private information.
  • As an example, consider FIGS. 4 and 5, which illustrate a method, in accordance with one embodiment, of protecting against phishing activities. The method can be implemented in connection with any suitable hardware, software, firmware or combination thereof. In one embodiment, the method is implemented in software in the form of computer-executable instructions, such as those defining an application that executes on a client computing device.
  • Step 400 detects a user-initiated activity on a client computing device. Any suitable application can be used to detect the user-initiated activity. For example, in one embodiment, an application in the form of a web browser is used to detect a user-initiated activity in the form of an attempted navigation associated with a web resource.
  • Responsive to detecting the user-initiated activity, step 402 checks locally on the client computing device to determine whether a web resource that is associated with the user-initiated activity is identified as being associated with a safe site.
  • This step of checking, locally on the client computing device, can occur contemporaneously with the user-initiated activity. A safe site can be any site that is not associated with phishing activities. The local check that is performed can be performed in the same or similar manner as described above.
  • Step 404 determines whether the web resource that is associated with the user-initiated activity is identified as being associated with a safe site. If it is, then step 406 allows the user to continue with the user-initiated activity.
  • If, on the other hand, the web resource is not identified as being associated with a safe site, then step 408 checks, remotely from the computing device, whether the web resource is identified as at least possibly being associated with a phishing activity. The remote check can be performed in the same or similar manner as described above.
  • Step 410 determines whether the web resource is identified as at least possibly being associated with a phishing activity. This can be accomplished by utilizing the web resource's derived reputation score, as noted above.
  • Step 412 provides a notification of whether the web resource is identified as at least possibly being associated with a phishing activity, and step 414 (FIG. 5) notifies the user of this information. This can be performed in any suitable way. For example, the user might only be presented with an alert and/or dialog box when the web resource is ranked in one or more of the categories discussed above. Alternately, the user might always be presented with an alert and/or dialog box.
  • Step 416 provides the user with an option to continue the user-initiated activity. Typically, this step is performed in the event that the web resource is identified as being associated with a phishing activity, although it is illustrated slightly differently here.
  • One example of how steps 412-414 can be implemented, including the user interfaces that can be employed, is illustrated and discussed below with regard to FIGS. 9-10.
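  • Putting steps 400-416 together, a simplified end-to-end flow might look like the following sketch. The remote lookup, dialog, and navigation hooks are stand-ins; allowing the navigation to proceed while the remote check is in flight is one possible design choice consistent with the earlier discussion of FIGS. 2 and 3.

```typescript
// A simplified end-to-end flow for FIGS. 4 and 5. The remote service call and
// the user dialog are stand-ins invented for this sketch.
const SAFE_HOSTS = new Set(["microsoft.com", "www.microsoft.com"]);

async function checkPhishingRemotely(url: string): Promise<boolean> {
  // Stand-in for step 408: ask a remote service whether the resource is
  // at least possibly associated with phishing.
  return url.includes("phish"); // placeholder heuristic, not a real lookup
}

function askUserToContinue(message: string): boolean {
  // Stand-in for the dialog of FIG. 9 (steps 414/416).
  console.log(`[dialog] ${message} Continue anyway? (assuming "no")`);
  return false;
}

async function onAttemptedNavigation(url: string): Promise<void> {
  const host = new URL(url).hostname;
  if (SAFE_HOSTS.has(host)) {
    console.log("Safe site; continuing navigation (step 406).");
    return;
  }
  // Step 408: the navigation itself is not modeled here; in a real flow it
  // could proceed while the remote check resolves in the background.
  console.log("Not on the local safe list; checking remotely (step 408)...");
  const suspicious = await checkPhishingRemotely(url);
  if (suspicious) {
    const proceed = askUserToContinue("This is a reported phishing website.");
    console.log(proceed ? "User chose to continue." : "Web page closed.");
  } else {
    console.log("Not a suspicious or reported website (FIG. 10); continuing.");
  }
}

onAttemptedNavigation("http://phish.example/login");
```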
  • Providing a User with an Option to Check a Web Resource
  • As described above, in order to determine whether a web resource is associated with an undesirable activity, checking occurs remotely from the user's computing device. Doing so, however, can cause privacy concerns for some users. For example, if a user wants to navigate to a certain webpage, the URL of the web page can be sent to a remote server to verify the absence of any undesirable activities, such as phishing. Certain users may be uncomfortable with the notion of allowing a remote server to see certain web pages that the user frequents. Thus, some users may find it desirable to have the option of determining whether or not the remote check takes place.
  • FIGS. 6 and 7 are flow diagrams that describe a method in accordance with one embodiment with the aforementioned privacy concerns in mind. The method can be implemented in connection with any suitable hardware, software, firmware or combination thereof. In one embodiment, the method is implemented in software in the form of computer-executable instructions, such as those defining an application that executes on a client computing device.
  • Step 600 detects a user-initiated activity on a computing device. In but one embodiment, and as noted above, one such activity takes place when the user clicks on a link associated with a web resource. Such a link might be present as part of a web page, an email document, or some other document on which a user might be working. Other examples of user-initiated activities are given above.
  • After detecting a user-initiated activity, the web resource can be checked locally as discussed above and as illustrated by step 602. Step 604 then determines whether the web resource is identified as being associated with a safe site. If it is, then step 610 allows the user to continue with the user-initiated activity. Checking locally poses no security risks because all of the information is already contained on the user's computing device.
  • If however, the local check reveals that the web resource is not identified as being associated with a safe site (e.g., not contained in the local list of safe sites), the user can be notified as follows.
  • Step 606 presents a user with a notification that enables the user to opt to have a web resource checked to ascertain whether the web resource is associated with one or more undesirable activities. This notification effectively alerts the user that the web resource is not on the local list of safe sites and asks the user whether he or she would like to check remotely from the computing device to determine whether the web resource associated with, for example, an attempted navigation is associated with any undesirable activities. Examples of undesirable activities were given above.
  • If, at step 608, the user declines to check remotely, step 610 allows the user to continue with his or her activity. On the other hand, if the user opts to conduct the remote check, step 612 conducts the remote check by sending a request to an appropriate server or other remote device.
  • Step 614 determines whether the web resource is associated with any undesirable activities. This step can be performed in any suitable way, examples of which are provided above. Step 616 provides a notification to the user with regard to the remote check that was performed. Step 618 (FIG. 7) receives this notification from the remote server and presents the notification to the user.
  • The notification can either tell the user whether or not the web resource is associated with any undesirable activities, or provide information that can further be used to make that decision, as described above.
  • If the web resource is not associated with any undesirable activities, the user can continue with his or her activity. On the other hand, if the web resource is determined to be associated with undesirable activities, step 620 can provide the user with an option to continue with the activity despite the association with undesirable activities.
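  • A hedged sketch of this opt-in variant (steps 600-620) follows; the prompt and the remote lookup are placeholders, and the key difference from the earlier flows is that nothing is sent off the device unless the user agrees.

```typescript
// Sketch of the privacy-conscious flow of FIGS. 6 and 7 (steps 600-620). The
// prompt and the remote lookup are placeholders; no URL leaves the device
// unless the user opts in.
const LOCAL_SAFE = new Set(["microsoft.com", "www.microsoft.com"]);

function promptForRemoteCheck(url: string): boolean {
  // Stand-in for steps 606/608: the notification icon and "check this website".
  console.log(`"${url}" is not on the local safe list. Check it remotely?`);
  return true; // assume the user opts in for this example
}

async function remoteCheck(url: string): Promise<"undesirable" | "ok"> {
  // Stand-in for steps 612-616 performed at the remote server.
  return url.includes("malware") ? "undesirable" : "ok";
}

async function handleLinkClick(url: string): Promise<void> {
  if (LOCAL_SAFE.has(new URL(url).hostname)) {
    console.log("Safe site; continue (step 610).");
    return;
  }
  if (!promptForRemoteCheck(url)) {
    console.log("User declined the remote check; continue (step 610).");
    return; // nothing was sent off the device
  }
  const verdict = await remoteCheck(url); // steps 612-618
  if (verdict === "undesirable") {
    console.log("Reported as undesirable; offer the option to continue (step 620).");
  } else {
    console.log("No undesirable association reported; continue.");
  }
}

handleLinkClick("http://malware.example/download");
```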
  • In Operation
  • The above methodology can be implemented in any suitable way using any suitable technology. As but one example of how the above-described techniques can be implemented from the perspective of the user, consider FIGS. 8-10.
  • Specifically, if a particular user has chosen to be given the option of determining whether a remote check will occur, a notification icon, such as that shown at 800 in FIG. 8, can appear when a user-initiated activity is detected. This icon may appear in the toolbar of a web browser to alert the user that the web resource to which he or she wishes to navigate is not on the local list of safe sites. When the user clicks on this icon, a list box can be presented to the user. One such list is shown at 802. The list gives the user the option to check the website, turn on automatic checking, report the website, or change phishing filter settings.
  • If the user selects "check this website", the website will be checked remotely from the user's computing device as described above. If the user selects "turn on automatic checking", the website will be checked remotely from the user's computing device, and the next time that a user-initiated activity is detected and the web resource is not on the local list of safe sites, the remote check will automatically occur without notifying the user.
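  • The "turn on automatic checking" behavior amounts to a persisted preference that suppresses the prompt on later navigations. A small sketch of how such a setting might gate the flow is shown below; the settings shape and option strings are assumptions for illustration.

```typescript
// Assumed settings model for the phishing-filter options in FIG. 8's list box.
// The option strings mirror the list items described above; the storage is a
// plain in-memory object for the sketch.
interface PhishingFilterSettings {
  automaticChecking: boolean;
}

const settings: PhishingFilterSettings = { automaticChecking: false };

type MenuChoice = "check this website" | "turn on automatic checking";

function handleMenuChoice(choice: MenuChoice, url: string): void {
  if (choice === "turn on automatic checking") {
    settings.automaticChecking = true;
  }
  // In either case, the current site is checked remotely right away.
  console.log(`Checking ${url} remotely...`);
}

function onNotOnSafeList(url: string): void {
  if (settings.automaticChecking) {
    // No prompt: subsequent unknown resources are checked automatically.
    console.log(`Automatically checking ${url} remotely...`);
  } else {
    console.log(`Showing the notification icon for ${url} (FIG. 8).`);
  }
}

handleMenuChoice("turn on automatic checking", "http://unknown.example/");
onNotOnSafeList("http://another-unknown.example/");
```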
  • FIG. 9 illustrates a dialog box that is presented to a user when a website that the user has attempted to navigate to has been determined to be associated with a phishing activity. There, the user is notified that the website is a reported phishing website and is given the option of either continuing to the website or of closing the web page.
  • FIG. 10 illustrates a dialog box that is presented to a user when a website that the user has attempted to navigate to is determined to not be associated with a phishing activity. There, the user is notified that the website is not a suspicious or reported website and the user can click “OK” to continue.
  • CONCLUSION
  • Various embodiments provide protection against web resources associated with one or more undesirable activities. In this manner, a user and/or the user's computing device can be protected from activities that could prove harmful.
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A computer-implemented method comprising:
checking locally on a computing device to determine whether a website is identified as being associated with a safe site, wherein the safe site is a site that is not associated with an undesirable activity including one or more of a phishing activity, a malware activity, or a spyware activity; and
if the website is not identified locally on the computing device as being associated with a safe site, causing a request to be transmitted for receipt by a remote resource to determine if the website is identified by the remote resource as being associated with the undesirable activity.
2. The method as recited in claim 1, wherein the checking locally on the computing device to determine whether the website is identified as being associated with a safe site is responsive to an attempted navigation to the website via the computing device.
3. The method as recited in claim 1, wherein the checking locally on the computing device to determine whether the website is identified as being associated with a safe site comprises checking the website against a list of sites that are considered to be safe sites.
4. The method as recited in claim 1, wherein the checking locally on the computing device to determine whether the website is identified as being associated with a safe site occurs contemporaneously with a navigation to the website via the computing device.
5. The method as recited in claim 1, further comprising:
receiving an indication of whether or not the website is identified by the remote resource as being associated with the undesirable activity; and
causing a visual indicia of the indication to be displayed via the computing device.
6. The method as recited in claim 1, further comprising:
receiving an indication of whether or not the website is identified by the remote resource as being associated with the undesirable activity; and
providing an option to continue a navigation to the website.
7. The method as recited in claim 1, further comprising receiving an indication of whether or not the website is identified by the remote resource as being associated with the undesirable activity, the indication being based on a local score from the computing device and reputation information from the remote resource.
8. The method as recited in claim 1, further comprising, if the website is not identified locally on the computing device as being associated with a safe site, navigating to the website contemporaneously with causing the request to be transmitted for receipt by the remote resource.
9. A computer-implemented method comprising:
checking locally on a device to determine whether a network resource is identified as being associated with a safe resource, wherein the safe resource is a resource that is not associated with an undesirable activity including one or more of a phishing activity, a malware activity, or a spyware activity; and
if the network resource is not identified locally on the device as being associated with a safe resource:
causing a request to be transmitted for receipt by a remote resource to determine if the network resource is identified by the remote resource as being associated with the undesirable activity; and
receiving an indication of whether or not the network resource is associated with the undesirable activity.
10. The method as recited in claim 9, wherein the checking locally on the device to determine whether the network resource is identified as being associated with a safe resource is responsive to an attempted navigation to the network resource via the device.
11. The method as recited in claim 9, wherein the network resource comprises a web site, and wherein the checking locally on the device to determine whether the network resource is identified as being associated with a safe resource comprises using a web browser to check the web site.
12. The method as recited in claim 9, wherein the checking locally on the device to determine whether the network resource is identified as being associated with a safe resource comprises checking the network resource against a list of network resources that are considered to be safe network resources.
13. The method as recited in claim 12, wherein the list of network resources comprises uniform resource locators (URLs) for the network resources.
14. The method as recited in claim 9, wherein the indication of whether or not the network resource is associated with the undesirable activity is based on a reputation score for the network resource, the reputation score being calculated based on a local score from the device and reputation information from the remote resource.
15. The method as recited in claim 14, wherein the reputation score indicates that the network resource is associated with the undesirable activity, the method further comprising causing to be displayed a visual indication of the reputation score.
16. The method as recited in claim 9, wherein the checking locally on the device to determine whether the network resource is identified as being associated with a safe resource comprises calculating a reputation score for the network resource, the reputation score being calculated based on a local score from the device and reputation information from the remote resource.
17. The method as recited in claim 9, further comprising, if the network resource is not identified locally on the device as being associated with a safe resource, navigating to the network resource contemporaneously with causing the request to be transmitted for receipt by the remote resource.
18. The method as recited in claim 9, further comprising presenting an option to continue a navigation to the network resource responsive to receiving the indication of whether or not the network resource is associated with the undesirable activity.
19. A computer-implemented method comprising:
calculating a local score for a network resource;
receiving reputation information associated with the network resource from a remote resource; and
calculating a reputation score for the network resource using the local score and the reputation information, the reputation score indicating that:
the network resource is known to be associated with one or more undesirable activities;
the network resource is suspected of being associated with one or more undesirable activities; or
the network resource is not known or suspected of being associated with one or more undesirable activities.
20. The method as recited in claim 19, further comprising:
presenting via a computing device a visual indication of the reputation score; and
providing an option to navigate to the network resource via the computing device.
US12/939,735 2005-11-10 2010-11-04 Protecting against network resources associated with undesirable activities Abandoned US20110047617A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/939,735 US20110047617A1 (en) 2005-11-10 2010-11-04 Protecting against network resources associated with undesirable activities

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/272,473 US7831915B2 (en) 2005-11-10 2005-11-10 Dynamically protecting against web resources associated with undesirable activities
US12/939,735 US20110047617A1 (en) 2005-11-10 2010-11-04 Protecting against network resources associated with undesirable activities

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/272,473 Continuation US7831915B2 (en) 2005-11-10 2005-11-10 Dynamically protecting against web resources associated with undesirable activities

Publications (1)

Publication Number Publication Date
US20110047617A1 true US20110047617A1 (en) 2011-02-24

Family

ID=38005289

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/272,473 Active 2027-05-28 US7831915B2 (en) 2005-11-10 2005-11-10 Dynamically protecting against web resources associated with undesirable activities
US12/939,735 Abandoned US20110047617A1 (en) 2005-11-10 2010-11-04 Protecting against network resources associated with undesirable activities

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/272,473 Active 2027-05-28 US7831915B2 (en) 2005-11-10 2005-11-10 Dynamically protecting against web resources associated with undesirable activities

Country Status (1)

Country Link
US (2) US7831915B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070118898A1 (en) * 2005-11-10 2007-05-24 Microsoft Corporation On demand protection against web resources associated with undesirable activities
US8495218B1 (en) * 2011-01-21 2013-07-23 Google Inc. Managing system resources
CN104113539A (en) * 2014-07-11 2014-10-22 Harbin Institute of Technology (Weihai) Phishing website engine detection method and device

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9384345B2 (en) 2005-05-03 2016-07-05 Mcafee, Inc. Providing alternative web content based on website reputation assessment
US8438499B2 (en) 2005-05-03 2013-05-07 Mcafee, Inc. Indicating website reputations during user interactions
US7562304B2 (en) 2005-05-03 2009-07-14 Mcafee, Inc. Indicating website reputations during website manipulation of user information
US8566726B2 (en) 2005-05-03 2013-10-22 Mcafee, Inc. Indicating website reputations based on website handling of personal information
US7831915B2 (en) * 2005-11-10 2010-11-09 Microsoft Corporation Dynamically protecting against web resources associated with undesirable activities
US8701196B2 (en) 2006-03-31 2014-04-15 Mcafee, Inc. System, method and computer program product for obtaining a reputation associated with a file
US8141132B2 (en) * 2006-08-15 2012-03-20 Symantec Corporation Determining an invalid request
US20080060062A1 (en) * 2006-08-31 2008-03-06 Robert B Lord Methods and systems for preventing information theft
US8904487B2 (en) * 2006-08-31 2014-12-02 Red Hat, Inc. Preventing information theft
US20080244715A1 (en) * 2007-03-27 2008-10-02 Tim Pedone Method and apparatus for detecting and reporting phishing attempts
US7966553B2 (en) * 2007-06-07 2011-06-21 Microsoft Corporation Accessible content reputation lookup
US9378373B2 (en) * 2007-09-24 2016-06-28 Symantec Corporation Software publisher trust extension application
US8220035B1 (en) * 2008-02-29 2012-07-10 Adobe Systems Incorporated System and method for trusted embedded user interface for authentication
US8555078B2 (en) 2008-02-29 2013-10-08 Adobe Systems Incorporated Relying party specifiable format for assertion provider token
US8353016B1 (en) 2008-02-29 2013-01-08 Adobe Systems Incorporated Secure portable store for security skins and authentication information
EP2283446A4 (en) * 2008-04-21 2012-09-05 Sentrybay Ltd Fraudulent page detection
US20100153884A1 (en) * 2008-12-12 2010-06-17 Yahoo! Inc. Enhanced web toolbar
US20100257403A1 (en) * 2009-04-03 2010-10-07 Microsoft Corporation Restoration of a system from a set of full and partial delta system snapshots across a distributed system
US8261126B2 (en) * 2009-04-03 2012-09-04 Microsoft Corporation Bare metal machine recovery from the cloud
US8862574B2 (en) * 2009-04-30 2014-10-14 Microsoft Corporation Providing a search-result filters toolbar
US8862699B2 (en) * 2009-12-14 2014-10-14 Microsoft Corporation Reputation based redirection service
US9336379B2 (en) 2010-08-19 2016-05-10 Microsoft Technology Licensing, Llc Reputation-based safe access user experience
US9083733B2 (en) 2011-08-01 2015-07-14 Visicom Media Inc. Anti-phishing domain advisor and method thereof
US20130046601A1 (en) * 2011-08-16 2013-02-21 Frootful Apps Limited Enhanced product search system and method
CN104580102B (en) * 2013-10-23 2019-03-05 北大方正集团有限公司 A kind of guard method of client-side program and service platform
US9521164B1 (en) * 2014-01-15 2016-12-13 Frank Angiolelli Computerized system and method for detecting fraudulent or malicious enterprises

Citations (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5664099A (en) * 1995-12-28 1997-09-02 Lotus Development Corporation Method and apparatus for establishing a protected channel between a user and a computer system
US6006328A (en) * 1995-07-14 1999-12-21 Christopher N. Drake Computer software authentication, protection, and security system
US20010033296A1 (en) * 2000-01-21 2001-10-25 Fullerton Nathan W. Method and apparatus for delivery and presentation of data
US20010039616A1 (en) * 2000-05-08 2001-11-08 Ids Corporation Carrier-free terminal authentication system by means of a mail-back method
US20020066039A1 (en) * 2000-11-30 2002-05-30 Dent Paul W. Anti-spoofing password protection
US20020124172A1 (en) * 2001-03-05 2002-09-05 Brian Manahan Method and apparatus for signing and validating web pages
US20020147645A1 (en) * 2001-02-02 2002-10-10 Open Tv Service platform suite management system
US20020184491A1 (en) * 2001-06-01 2002-12-05 International Business Machines Corporation Internet authentication with multiple independent certificate authorities
US20030028762A1 (en) * 2001-07-31 2003-02-06 Kevin Trilli Entity authentication in a shared hosting computer network environment
US20030065776A1 (en) * 2001-09-28 2003-04-03 Dale Malik Methods and systems for a communications and information resource manager
US20030097591A1 (en) * 2001-11-20 2003-05-22 Khai Pham System and method for protecting computer users from web sites hosting computer viruses
US6571256B1 (en) * 2000-02-18 2003-05-27 Thekidsconnection.Com, Inc. Method and apparatus for providing pre-screened content
US6609253B1 (en) * 1999-12-30 2003-08-19 Bellsouth Intellectual Property Corporation Method and system for providing interactive media VCR control
US20040003248A1 (en) * 2002-06-26 2004-01-01 Microsoft Corporation Protection of web pages using digital signatures
US6687741B1 (en) * 2000-05-24 2004-02-03 Microsoft Corporation Sending a file as a link and/or as an attachment
US20040060063A1 (en) * 2002-09-24 2004-03-25 Russ Samuel H. PVR channel and PVR IPG information
US6725380B1 (en) * 1999-08-12 2004-04-20 International Business Machines Corporation Selective and multiple programmed settings and passwords for web browser content labels
US20040123157A1 (en) * 2002-12-13 2004-06-24 Wholesecurity, Inc. Method, system, and computer program product for security within a global computer network
US20040128552A1 (en) * 2002-12-31 2004-07-01 Christopher Toomey Techniques for detecting and preventing unintentional disclosures of sensitive data
US20040163117A1 (en) * 2000-06-09 2004-08-19 Rodriguez Arturo A. Media-on-demand filing and reminder system
US20040213273A1 (en) * 2003-04-22 2004-10-28 Kenneth Ma Network attached storage device servicing audiovisual content
US20040236874A1 (en) * 2001-05-17 2004-11-25 Kenneth Largman Computer system architecture and method providing operating-system independent virus-, hacker-, and cyber-terror-immune processing environments
US20040268386A1 (en) * 2002-06-08 2004-12-30 Gotuit Video, Inc. Virtual DVD library
US20050027822A1 (en) * 2003-07-30 2005-02-03 Plaza Manuel Eslick Method and system for providing secondary internet access features by intercepting primary browser window locators
US6874084B1 (en) * 2000-05-02 2005-03-29 International Business Machines Corporation Method and apparatus for establishing a secure communication connection between a java application and secure server
US20050076092A1 (en) * 2003-10-02 2005-04-07 Sony Corporation And Sony Electronics Inc. User shared virtual channel via media storage
US20050086161A1 (en) * 2005-01-06 2005-04-21 Gallant Stephen I. Deterrence of phishing and other identity theft frauds
US20050169467A1 (en) * 2004-02-03 2005-08-04 Hank Risan Method and system for preventing unauthorized recording of media content in an iTunes™ environment
US20050268100A1 (en) * 2002-05-10 2005-12-01 Gasparini Louis A System and method for authenticating entities to users
US20050278729A1 (en) * 1999-04-21 2005-12-15 Interactual Technologies, Inc. Presentation of media content
US20060015722A1 (en) * 2004-07-16 2006-01-19 Geotrust Security systems and services to provide identity and uniform resource identifier verification
US20060025132A1 (en) * 2004-04-16 2006-02-02 Jeyhan Karaoguz Remote configuration and control of local devices via a broadband access gateway
US20060053293A1 (en) * 2004-09-07 2006-03-09 Zager Robert P User interface and anti-phishing functions for an anti-spam micropayments system
US20060075504A1 (en) * 2004-09-22 2006-04-06 Bing Liu Threat protection network
US20060080444A1 (en) * 2004-09-03 2006-04-13 Michael Peddemors System and method for controlling access to a network resource
US20060080437A1 (en) * 2004-10-13 2006-04-13 International Business Machines Corporation Fake web addresses and hyperlinks
US20060095404A1 (en) * 2004-10-29 2006-05-04 The Go Daddy Group, Inc Presenting search engine results based on domain name related reputation
US20060095955A1 (en) * 2004-11-01 2006-05-04 Vong Jeffrey C V Jurisdiction-wide anti-phishing network service
US20060149813A1 (en) * 1999-03-04 2006-07-06 Simple Devices System and method for providing content, management, and interactivity for client devices
US7082429B2 (en) * 2003-12-10 2006-07-25 National Chiao Tung University Method for web content filtering
US20060173974A1 (en) * 2005-02-02 2006-08-03 Victor Tang System and method for providing mobile access to personal media
US20060218403A1 (en) * 2005-03-23 2006-09-28 Microsoft Corporation Visualization of trust in an address bar
US20060224511A1 (en) * 2005-03-29 2006-10-05 Sbc Knowledge Ventures, Lp Anti-phishing methods based on an aggregate characteristic of computer system logins
US20070006305A1 (en) * 2005-06-30 2007-01-04 Microsoft Corporation Preventing phishing attacks
US20070055749A1 (en) * 2005-09-06 2007-03-08 Daniel Chien Identifying a network address source for authentication
US7194004B1 (en) * 2002-01-28 2007-03-20 3Com Corporation Method for managing network access
US20070083670A1 (en) * 2005-10-11 2007-04-12 International Business Machines Corporation Method and system for protecting an internet user from fraudulent IP addresses on a DNS server
US7216292B1 (en) * 1999-09-01 2007-05-08 Microsoft Corporation System and method for populating forms with previously used data values
US20070107054A1 (en) * 2005-11-10 2007-05-10 Microsoft Corporation Dynamically protecting against web resources associated with undesirable activities
US20070118898A1 (en) * 2005-11-10 2007-05-24 Microsoft Corporation On demand protection against web resources associated with undesirable activities
US7296238B1 (en) * 2000-09-08 2007-11-13 Corel Corporation Method and apparatus for triggering automated processing of data
US20080109852A1 (en) * 2006-10-20 2008-05-08 Kretz Martin H Super share
US20080147735A1 (en) * 2006-12-18 2008-06-19 Microsoft Corporation Media content catalogs
US7457823B2 (en) * 2004-05-02 2008-11-25 Markmonitor Inc. Methods and systems for analyzing data related to possible online fraud
US7606821B2 (en) * 2004-06-30 2009-10-20 Ebay Inc. Method and system for preventing fraudulent activities
US7634810B2 (en) * 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification
US7769820B1 (en) * 2005-06-30 2010-08-03 Voltage Security, Inc. Universal resource locator verification services using web site attributes
US7913302B2 (en) * 2004-05-02 2011-03-22 Markmonitor, Inc. Advanced responses to online fraud
US7984304B1 (en) * 2004-03-02 2011-07-19 Vmware, Inc. Dynamic verification of validity of executable code
US8296295B2 (en) * 2006-07-05 2012-10-23 BNA (Llesiant Corporation) Relevance ranked faceted metadata search method
US20120317101A1 (en) * 2006-07-14 2012-12-13 Chacha Search, Inc. Method and system for qualifying keywords in query strings

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2401445A (en) 2003-05-08 2004-11-10 Simon Freeman Web site security model
FR2856596B1 (en) 2003-06-27 2007-04-27 Bioprojet Soc Civ NOVEL PSYCHIATRIC DRUG ASSOCIATION AND THE USE OF AN INVERSE HISTAMINE H3 RECEPTOR ANTAGONIST OR AGONIST TO PREPARE A MEDICAMENT PREVENTING ADVERSE EFFECTS OF PSYCHOTROPES.
EP1683293A4 (en) 2003-11-07 2007-07-25 Rsa Security Inc System and method of addressing email and electronic communication fraud

Patent Citations (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6006328A (en) * 1995-07-14 1999-12-21 Christopher N. Drake Computer software authentication, protection, and security system
US5664099A (en) * 1995-12-28 1997-09-02 Lotus Development Corporation Method and apparatus for establishing a protected channel between a user and a computer system
US20060149813A1 (en) * 1999-03-04 2006-07-06 Simple Devices System and method for providing content, management, and interactivity for client devices
US20050278729A1 (en) * 1999-04-21 2005-12-15 Interactual Technologies, Inc. Presentation of media content
US6725380B1 (en) * 1999-08-12 2004-04-20 International Business Machines Corporation Selective and multiple programmed settings and passwords for web browser content labels
US7216292B1 (en) * 1999-09-01 2007-05-08 Microsoft Corporation System and method for populating forms with previously used data values
US6609253B1 (en) * 1999-12-30 2003-08-19 Bellsouth Intellectual Property Corporation Method and system for providing interactive media VCR control
US20010033296A1 (en) * 2000-01-21 2001-10-25 Fullerton Nathan W. Method and apparatus for delivery and presentation of data
US6571256B1 (en) * 2000-02-18 2003-05-27 Thekidsconnection.Com, Inc. Method and apparatus for providing pre-screened content
US6874084B1 (en) * 2000-05-02 2005-03-29 International Business Machines Corporation Method and apparatus for establishing a secure communication connection between a java application and secure server
US20010039616A1 (en) * 2000-05-08 2001-11-08 Ids Corporation Carrier-free terminal authentication system by means of a mail-back method
US6687741B1 (en) * 2000-05-24 2004-02-03 Microsoft Corporation Sending a file as a link and/or as an attachment
US20040163117A1 (en) * 2000-06-09 2004-08-19 Rodriguez Arturo A. Media-on-demand filing and reminder system
US7296238B1 (en) * 2000-09-08 2007-11-13 Corel Corporation Method and apparatus for triggering automated processing of data
US20020066039A1 (en) * 2000-11-30 2002-05-30 Dent Paul W. Anti-spoofing password protection
US20020147645A1 (en) * 2001-02-02 2002-10-10 Open Tv Service platform suite management system
US20020124172A1 (en) * 2001-03-05 2002-09-05 Brian Manahan Method and apparatus for signing and validating web pages
US20040236874A1 (en) * 2001-05-17 2004-11-25 Kenneth Largman Computer system architecture and method providing operating-system independent virus-, hacker-, and cyber-terror-immune processing environments
US20020184491A1 (en) * 2001-06-01 2002-12-05 International Business Machines Corporation Internet authentication with multiple independent certificate authorities
US20030028762A1 (en) * 2001-07-31 2003-02-06 Kevin Trilli Entity authentication in a shared hosting computer network environment
US20030065776A1 (en) * 2001-09-28 2003-04-03 Dale Malik Methods and systems for a communications and information resource manager
US20030097591A1 (en) * 2001-11-20 2003-05-22 Khai Pham System and method for protecting computer users from web sites hosting computer viruses
US7194004B1 (en) * 2002-01-28 2007-03-20 3Com Corporation Method for managing network access
US20050268100A1 (en) * 2002-05-10 2005-12-01 Gasparini Louis A System and method for authenticating entities to users
US20040268386A1 (en) * 2002-06-08 2004-12-30 Gotuit Video, Inc. Virtual DVD library
US20040003248A1 (en) * 2002-06-26 2004-01-01 Microsoft Corporation Protection of web pages using digital signatures
US20040060063A1 (en) * 2002-09-24 2004-03-25 Russ Samuel H. PVR channel and PVR IPG information
US20040123157A1 (en) * 2002-12-13 2004-06-24 Wholesecurity, Inc. Method, system, and computer program product for security within a global computer network
US20040128552A1 (en) * 2002-12-31 2004-07-01 Christopher Toomey Techniques for detecting and preventing unintentional disclosures of sensitive data
US7152244B2 (en) * 2002-12-31 2006-12-19 America Online, Inc. Techniques for detecting and preventing unintentional disclosures of sensitive data
US20040213273A1 (en) * 2003-04-22 2004-10-28 Kenneth Ma Network attached storage device servicing audiovisual content
US20050027822A1 (en) * 2003-07-30 2005-02-03 Plaza Manuel Eslick Method and system for providing secondary internet access features by intercepting primary browser window locators
US20050076092A1 (en) * 2003-10-02 2005-04-07 Sony Corporation And Sony Electronics Inc. User shared virtual channel via media storage
US7082429B2 (en) * 2003-12-10 2006-07-25 National Chiao Tung University Method for web content filtering
US20050169467A1 (en) * 2004-02-03 2005-08-04 Hank Risan Method and system for preventing unauthorized recording of media content in an iTunes™ environment
US7984304B1 (en) * 2004-03-02 2011-07-19 Vmware, Inc. Dynamic verification of validity of executable code
US20060025132A1 (en) * 2004-04-16 2006-02-02 Jeyhan Karaoguz Remote configuration and control of local devices via a broadband access gateway
US7913302B2 (en) * 2004-05-02 2011-03-22 Markmonitor, Inc. Advanced responses to online fraud
US7457823B2 (en) * 2004-05-02 2008-11-25 Markmonitor Inc. Methods and systems for analyzing data related to possible online fraud
US7606821B2 (en) * 2004-06-30 2009-10-20 Ebay Inc. Method and system for preventing fraudulent activities
US20060015722A1 (en) * 2004-07-16 2006-01-19 Geotrust Security systems and services to provide identity and uniform resource identifier verification
US20060080444A1 (en) * 2004-09-03 2006-04-13 Michael Peddemors System and method for controlling access to a network resource
US20060053293A1 (en) * 2004-09-07 2006-03-09 Zager Robert P User interface and anti-phishing functions for an anti-spam micropayments system
US20060075504A1 (en) * 2004-09-22 2006-04-06 Bing Liu Threat protection network
US20060080437A1 (en) * 2004-10-13 2006-04-13 International Business Machines Corporation Fake web addresses and hyperlinks
US20060095404A1 (en) * 2004-10-29 2006-05-04 The Go Daddy Group, Inc Presenting search engine results based on domain name related reputation
US20060095955A1 (en) * 2004-11-01 2006-05-04 Vong Jeffrey C V Jurisdiction-wide anti-phishing network service
US7634810B2 (en) * 2004-12-02 2009-12-15 Microsoft Corporation Phishing detection, prevention, and notification
US20050086161A1 (en) * 2005-01-06 2005-04-21 Gallant Stephen I. Deterrence of phishing and other identity theft frauds
US20060173974A1 (en) * 2005-02-02 2006-08-03 Victor Tang System and method for providing mobile access to personal media
US20060218403A1 (en) * 2005-03-23 2006-09-28 Microsoft Corporation Visualization of trust in an address bar
US20060224511A1 (en) * 2005-03-29 2006-10-05 Sbc Knowledge Ventures, Lp Anti-phishing methods based on an aggregate characteristic of computer system logins
US7769820B1 (en) * 2005-06-30 2010-08-03 Voltage Security, Inc. Universal resource locator verification services using web site attributes
US20070006305A1 (en) * 2005-06-30 2007-01-04 Microsoft Corporation Preventing phishing attacks
US20070055749A1 (en) * 2005-09-06 2007-03-08 Daniel Chien Identifying a network address source for authentication
US20070083670A1 (en) * 2005-10-11 2007-04-12 International Business Machines Corporation Method and system for protecting an internet user from fraudulent IP addresses on a DNS server
US7831915B2 (en) * 2005-11-10 2010-11-09 Microsoft Corporation Dynamically protecting against web resources associated with undesirable activities
US20070118898A1 (en) * 2005-11-10 2007-05-24 Microsoft Corporation On demand protection against web resources associated with undesirable activities
US20070107054A1 (en) * 2005-11-10 2007-05-10 Microsoft Corporation Dynamically protecting against web resources associated with undesirable activities
US8353029B2 (en) * 2005-11-10 2013-01-08 Microsoft Corporation On demand protection against web resources associated with undesirable activities
US8296295B2 (en) * 2006-07-05 2012-10-23 BNA (Llesiant Corporation) Relevance ranked faceted metadata search method
US20120317101A1 (en) * 2006-07-14 2012-12-13 Chacha Search, Inc. Method and system for qualifying keywords in query strings
US20080109852A1 (en) * 2006-10-20 2008-05-08 Kretz Martin H Super share
US20080147735A1 (en) * 2006-12-18 2008-06-19 Microsoft Corporation Media content catalogs
US8706777B2 (en) * 2006-12-18 2014-04-22 Microsoft Corporation Media content catalogs
US20140195910A1 (en) * 2006-12-18 2014-07-10 Microsoft Corporation Media content catalogs

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070118898A1 (en) * 2005-11-10 2007-05-24 Microsoft Corporation On demand protection against web resources associated with undesirable activities
US8353029B2 (en) 2005-11-10 2013-01-08 Microsoft Corporation On demand protection against web resources associated with undesirable activities
US8495218B1 (en) * 2011-01-21 2013-07-23 Google Inc. Managing system resources
CN104113539A (en) * 2014-07-11 2014-10-22 Harbin Institute of Technology (Weihai) Phishing website engine detection method and device

Also Published As

Publication number Publication date
US7831915B2 (en) 2010-11-09
US20070107054A1 (en) 2007-05-10

Similar Documents

Publication Publication Date Title
US7831915B2 (en) Dynamically protecting against web resources associated with undesirable activities
US8353029B2 (en) On demand protection against web resources associated with undesirable activities
US10243991B2 (en) Methods and systems for generating dashboards for displaying threat insight information
US10027708B2 (en) Login failure sequence for detecting phishing
EP2859495B1 (en) Malicious message detection and processing
US8112799B1 (en) Method, system, and computer program product for avoiding cross-site scripting attacks
US7617532B1 (en) Protection of sensitive data from malicious e-mail
US9262638B2 (en) Hygiene based computer security
US9325731B2 (en) Identification of and countermeasures against forged websites
US20120151559A1 (en) Threat Detection in a Data Processing System
US8205260B2 (en) Detection of window replacement by a malicious software program
US20130036468A1 (en) Anti-phishing domain advisor and method thereof
JP2012528528A (en) Managing potentially phishing messages in the context of non-webmail clients
US20100083383A1 (en) Phishing shield
US8601574B2 (en) Anti-phishing methods based on an aggregate characteristic of computer system logins
Spett Cross-site scripting
EP3195140B1 (en) Malicious message detection and processing
JP4564916B2 (en) Phishing fraud countermeasure method, terminal, server and program
Jakobsson The rising threat of launchpad attacks
Chuchuen et al. Relationship between phishing techniques and user personality model of Bangkok Internet users
Aneke et al. Towards determining cybercrime technology evolution in Nigeria
Gan et al. Phishing: a growing challenge for Internet banking providers in Malaysia
Flinn et al. Omnivore: Risk management through bidirectional transparency
Escoses et al. Phisherman: Phishing Link Scanner

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0001

Effective date: 20141014

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION